Universität Passau
Fakultät für Informatik und Mathematik
Lehrstuhl für Programmierung
Prof. Christian Lengauer, Ph.D.
Diplomarbeit

Metadata Driven Restructuring of Distributed Component Based Applications

Michael Häusler
Date: November 24, 2006
Advisor: Prof. Christian Lengauer, Ph.D.
Supervisor: Dipl.-Inf. Armin Größlinger
Statutory Declaration

I hereby declare that I have written this diploma thesis independently and that I have used no sources or aids other than those indicated. All passages taken from other works, verbatim or in substance, have been marked as such. Furthermore, I declare that I have not submitted this thesis in the same or a similar form to any other examination board.

Passau, November 24, 2006
....................................
(Michael Häusler)
Supervisor Contacts

Prof. Christian Lengauer, Ph.D.
Lehrstuhl für Programmierung
Universität Passau
E-Mail: [email protected]
Web: http://www.uni-passau.de/lengauer/

Dipl.-Inf. Armin Größlinger
Lehrstuhl für Programmierung
Universität Passau
E-Mail: [email protected]
Web: http://www.uni-passau.de/lengauer/
Abstract

Metadata Driven Restructuring of Distributed Component Based Applications
In Grid computing, applications are often long-running and have to cope with a highly dynamic runtime environment. This necessitates the design of adaptive programs. Many frameworks exist to ease the development of applications that can be dynamically modified at runtime. But usually the actual adaptations have to be determined by the end-user or must be anticipated by the developer for a self-adaptive solution. We explore the use of component metadata in a generic optimiser to automatically generate adaptations that are appropriate for the current runtime context.

We use the component model Fractal [BCL+04] as a basis for adaptive applications. A cost model for the execution time and communication costs of the components' interfaces allows the optimisation of the component distribution on the available nodes. Finding an optimal distribution is NP-hard. We describe a heuristic based on a physical interpretation of the problem. Restructuring patterns permit the replacement of adjacent components on a single node with implementations that exploit locality. Redistribution and local restructuring are per se conflicting optimisations. We describe a unified approach for global optimisation.
Contents

1 Introduction
2 Component Models as a Foundation for Adaptivity
  2.1 Fractal
    2.1.1 The Fractal Component Model
    2.1.2 Fractal's Java Reference Implementation Julia
  2.2 ProActive
  2.3 Extensions to the Fractal Component Model
    2.3.1 State Management
    2.3.2 Profiling
    2.3.3 Metadata
3 Implementation Strategies for Self-Adaptivity
  3.1 Dynaco/AFPAC
  3.2 PLASMA
  3.3 SAFRAN
  3.4 Comparison
4 A Generic Approach to Self-Adaptivity
  4.1 Motivation
    4.1.1 Redistribution
    4.1.2 Restructuring
  4.2 Redistribution
    4.2.1 Notation
    4.2.2 Resources
    4.2.3 Communication
    4.2.4 Physical interpretation
  4.3 Restructuring
  4.4 Unified approach
5 Conclusions
Bibliography
Chapter 1

Introduction

The needs for application adaptivity are manifold. Providers of critical services must ensure availability even during reconfiguration. Developers of large systems with high startup costs want to avoid a restart just to run another test with a slightly altered configuration. Researchers in the field of ubiquitous computing are developing numerous approaches to context-awareness [BDR06]. And, of course, adaptivity is a key property for applications in Grid computing, which must cope with a highly dynamic runtime environment where heterogeneous nodes have fluctuating system loads and network connections can fail.

As it is a tedious and expensive process to implement a reconfigurable application manually, there are frameworks that ease the development of adaptive software composed of flexible components. Of course, different component models focus on diverse goals like interoperability (e.g., the CORBA component model) or transparent integration of non-functional aspects such as persistence and distribution (e.g., Enterprise JavaBeans). However, many component models also address static and dynamic configuration. Chapter 2 presents Fractal [BCS02], a modular and extensible component model, and its Java reference implementation Julia [BCL+04]. Fractal features high expressiveness, which allows truthful modelling, and offers several levels of control for reconfiguration at runtime. We also introduce the Fractal implementation ProActive [BBC+06], which focuses on Grid computing and provides high-level mechanisms for network communication and component migration.

Normally, a system is deployed with a configuration that is suited to the needs and the environment state that can be foreseen at deployment time. As both the demands on the system and the environment evolve, dynamic reconfiguration allows a change to a new configuration that is better adapted to the current runtime context. Of course, the question is how to generate such a configuration. Traditionally, developers try to anticipate the possible contexts their applications can be confronted with and determine appropriate adaptation strategies a priori. The implementation strategies for such adaptations differ widely: from ad-hoc techniques to sophisticated frameworks. Chapter 3 shows the design questions
that developers of self-adaptive solutions must answer (namely goals, adaptivity, activation, selection, and implementation). We compare how three frameworks for self-adaptivity handle these problems. Dynaco/AFPAC [ABP04, BAP06] focuses on intra-component and intra-operation adaptivity for parallel components. PLASMA [LH05] is a Fractal based framework for the development of self-adaptive streaming multimedia applications. SAFRAN [DL05a, Dav05] provides domain-specific languages to implement self-adaptivity for Fractal components. None of these systems provides automatic selection of adaptations.

We explore generic optimisation in Chapter 4. We focus on two types of adaptivity: redistribution and local restructuring. We provide cost models for resource utilisation and communication. These lead to NP-hard optimisation problems for redistribution. We suggest a heuristic that is based on a physical interpretation for the special case of homogeneous processors in a single network segment. Furthermore, we present a formal foundation for pattern based restructuring. Redistribution and local restructuring are conflicting optimisations. The introduction of a canonical architecture allows a unified perception of both.

In Chapter 5 we draw conclusions. Cost models and architectural patterns allow reasoning about an application's behaviour and structure. This use of metadata instead of actual adaptations makes it possible to cope with unanticipated compositions and execution contexts. We identify areas of future research, mainly the search for heuristics for redistribution on heterogeneous processors and the use of restructuring patterns in behaviour-driven and domain-specific restructuring.
Chapter 2

Component Models as a Foundation for Adaptivity

The term component is highly overloaded. To make things worse, there is also confusion about the distinction and connection between the terms "object" and "component" [SGM02, pp. 35–36]. The WCOP'96 report sees them as closely related and gives its classic definition of a component:

"Component-oriented programming (COP) has recently been described as the natural extension of OOP to cater for the special needs of independently extensible systems." [SP97, p. 127]

"A component is a unit of composition with contractually specified interfaces and explicit context dependencies only. Components can be deployed independently and are subject to composition by third parties." [SP97, p. 130]

On this note, a component is characterised by a development style that takes special care of independence. The component's interfaces and requirements have to be stipulated clearly and precisely to increase its reusability by third parties. These desiderata call for high cohesion and weak coupling. But these properties are the classic qualities of modularisation and are also found in good object-oriented designs. They are therefore no distinguishing marks of components. Szyperski gives another aspect of components in his book "Component Software":

"A component is a set of normally simultaneously deployed atomic components. This distinction between components and atomic components caters for the fact that most atomic components will never be deployed individually although they could. [. . . ] the atomic units of deployment in Java are class files." [SGM02, p. 420]
Components are therefore more coarse-grained than objects; in fact, they can contain objects. This additional layer of abstraction is important because it helps to distinguish between collaboration ('uses a' relationship) and composition ('contains a' relationship). Assume that objects are only allowed to collaborate, whereas components may collaborate and contain objects or other components. This makes the structure of ownership explicit, which was hidden before. It does not only help developers to better understand a system's architecture: as components cannot be specified without defining their borders, they are in themselves metadata about a component based application.

Along with their structuring effect, components offer a starting-point for the transparent integration of non-functional aspects. Since all communication happens at specified interfaces only, it is feasible to encapsulate a component in an automatically generated proxy component that handles, e.g., life-cycle management, reconfiguration, or translation from local to remote invocations. Obviously, proxying at component level introduces less overhead than proxying at object level.

So the benefits of components for adaptivity are twofold. They are targets for adaptation at a coarser granularity than objects. Moreover, there are already implementations of component models that provide the underlying mechanisms of reconfiguration and transparent migration of components across the network. Of course, this means that we are restricting ourselves to the degree of adaptivity that is allowed by the component model and by the granularity of the used components. But although the degree of adaptivity is predetermined, the actual adaptations can be generic. There is a trade-off between the development costs for adaptivity and the development costs for adaptations. Unanticipated dynamic adaptations are in fact possible; e.g., Iguana/J [RC02] is an architecture for aspect oriented programming (AOP) that supports dynamic binding at runtime. As it works with plain Java programs, there are no additional costs for adaptivity. But since the adaptations are usually written manually, their costs are high.

Given that we want to automate the generation of adaptations, the ideal component model and its implementation support the following requirements:

/R1/ Restructuring: The component model allows restructuring by unbinding a set of component instances, replacing them, and rebinding the replacement.

/R2/ Reflection: Restructuring can be triggered by different actors (e.g., as part of the application's normal mode of operation). Therefore, changes in the application's structure are observable, at least for those parts that are subject to automatic adaptations.
/R3/ Life-cycle management: It is possible to pause and resume component instances, so that adjacent instances can be safely restructured.

/R4/ Distribution: The model allows distributed deployments on different network nodes.

/R5/ Concurrency: To exploit the resources of different nodes, it is possible to express concurrent activities with components.

/R6/ Instance migration: There are measures to relocate a component instance to another node on the network.

/R7/ State management: If component instances have observable state, it is possible to extract it, so replacement instances can be configured accordingly.

/R8/ Profiling: It is possible to count the operations, to measure the execution time of an invocation, and to determine whether a component instance is running, waiting, or idle.

/R9/ Arbitrary metadata: There is support for attaching arbitrary metadata to components.

/R10/ Telemetry: It is possible to monitor the load of the participating nodes and their network connectivity.
2.1 Fractal

2.1.1 The Fractal Component Model
Fractal [BCS02, BCL+04] is a component model for hierarchical component based architectures. It features components with an open set of control capabilities. Components can thus show different grades of reflection, which enables the developer to make a trade-off between flexibility and performance at the component level. Although there are predefined control capabilities in the model, Fractal allows their extension. It aids the separation of non-functional from functional concerns by inverting the flow of control for them: components can be configured and deployed by an external entity.

Bruneton, Coupaye, and Stefani give seven requirements that have driven the development of Fractal [BCS02, Sec. 2]:

(i) Encapsulation and identity: Components abstract from their implementation details. They communicate via their explicitly defined interfaces only. Two component instances are distinguishable.
(ii) Composition: It is possible to dynamically compose components. "[. . . ] arbitrary containment relationships between components should be explicitly maintained, enforced and modifiable at run-time." [l.c.]

(iii) Sharing: In particular, containment relationships are flexible enough to allow the truthful modelling of resource dependencies and therefore allow sharing.

(iv) Life-cycle: "A general system model should support different forms and spans of component life-cycles." [l.c.] This includes suspension.

(v) Activities: The model should be dynamic enough to support the modelling of activities.

(vi) Control: Controller components are flexible. Interception of operation invocations is possible.

(vii) Mobility: Components and component assemblies should be allowed to move in the containment hierarchy.

These desiderata correspond to our own first three requirements. The requirements (ii) composition and (vii) mobility describe a model that is flexible enough to allow our required adaptations (/R1/ restructuring and /R2/ reflection). Requirement (iv) describes our need for life-cycle management (/R3/).
Figure 2.1: Graphical notation introduced by Fractal (see text).

There is a graphical notation for the model's core concepts, which is shown in Figure 2.1. Components can have functional interfaces, which are either of type client or of type server, and non-functional (controller) interfaces. A tee on the right side (see Figure 2.1 a) indicates a client interface, whereas a tee on the left side (see Figure 2.1 b) denotes a server interface. Controller interfaces are shown by a tee on the top side (see Figure 2.1 c). Interfaces are typed (as are components) and named. Thus, one component can have several interfaces of the same type. However, the predefined controller interfaces have predefined names, so there can be at most one controller of each type.

Components can be connected by primitive bindings. A primitive binding is a directed connection between a client interface and a server interface. A binding
between two interfaces is only allowed if the server interface is a subtype of the client interface. Furthermore, bindings must respect visibility (see below). Composite bindings can be realised by connector components.

Depending on the grade of reflection, components can be black-box (see Figure 2.1 d) or white-box (see Figure 2.1 e). A component consists of a membrane, which implements the non-functional concerns, and a content, where the functional concerns reside. In white-box components the membrane is visualised as a frame around an inner box which represents the content. Sub-components are drawn in the content area, whereas functionality that is implemented directly in a component is usually not shown. This results in a natural representation of the containment hierarchy as an inclusion diagram.

The membrane also makes it possible to display the model's visibility concept. Functional interfaces of a component C are either inner interfaces or (plain) outer interfaces. Inner interfaces are only accessible to direct sub-components of C. Outer interfaces are only accessible to the direct parent component of C and to direct sub-components of this parent (siblings). As a component moves in the containment hierarchy (possible at runtime), it also leaves and enters different visibility scopes. This visibility concept is orthogonal to the concepts usually found in object oriented programming languages.
Figure 2.2: Demonstration of various Fractal features, in particular sharing and visibility.

The fact that a component can be shared and thus be a sub-component of different parents extends the applicability of this visibility concept. An example is shown in Figure 2.2. A shared resource can be modelled as a sub-component and therefore can be bound to internal interfaces. This improves encapsulation as
these interfaces do not have to be exposed. Since a membrane can superimpose a control behaviour on its content (including sub-components), the question arises whether the membrane of component a or the membrane of component b prevails. This is solved by an according control strategy in the parent component.

The Fractal specification [BCS04] predefines several control capabilities. They are specified by an interface for each controller (using a pseudo interface definition language (IDL), which is a modified subset of Java). These "low-level" interfaces contain specific methods. They are descriptions of the functionality of "high-level" Fractal interfaces.

Component introspection: The Component interface makes it possible to discover the type of a component and its external Fractal interfaces.

Interface introspection: The Interface interface makes it possible to discover the name, type, and visibility of a given Fractal interface.

Attribute control: The AttributeController allows a component to expose configurable properties. It is an empty marker interface that indicates the presence of getter and/or setter methods which are named in JavaBeans style.

Binding control: The BindingController permits the binding of a component's client interfaces to other components.

Content control: The ContentController provides for discovery and manipulation of the sub-components of a component.

Life-cycle control: The LifeCycleController makes it possible to start and stop the execution of a component.

The Fractal specification also defines several conformance levels by making some of the controllers mandatory (e.g., at level 1 every component must provide, at least, the Component interface) and by making some of the controllers exclusive for a specific control purpose (e.g., at level 1.1 every component that exposes attributes must do so by providing the AttributeController interface (among other things)).
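To make the interplay of these controllers concrete, the following sketch (our illustration, not part of the Fractal specification) pauses a component, rebinds one of its client interfaces, and resumes it. It uses the standard Fractal API and the helper class org.objectweb.fractal.util.Fractal; the variable names are assumptions made for the example.

import org.objectweb.fractal.api.Component;
import org.objectweb.fractal.api.control.BindingController;
import org.objectweb.fractal.api.control.LifeCycleController;
import org.objectweb.fractal.util.Fractal;

public class Rebinder {
    // Stops comp, replaces the binding of one client interface, and restarts it.
    public static void rebind(Component comp, String clientItf, Object newServerItf)
            throws Exception {
        LifeCycleController lc = Fractal.getLifeCycleController(comp);
        BindingController bc = Fractal.getBindingController(comp);
        lc.stopFc();                          // reach a safe state first (/R3/)
        bc.unbindFc(clientItf);               // remove the old primitive binding
        bc.bindFc(clientItf, newServerItf);   // bind to the new server interface
        lc.startFc();                         // resume execution
    }
}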
2.1.2 Fractal's Java Reference Implementation Julia
In Java every instance of a Fractal component is represented by an object, so the implementation of identity is delegated to the language. Every Java object can be seen as a black box Fractal component with no control capabilities (a base component). As Julia [BCL+04] is a pure Java implementation that does not extend the language, any given Fractal component can be implemented with plain old Java objects (POJOs). In fact, simple components can be realised within
a single object that includes content and membrane. But this is a tedious and error-prone task (e.g., component interfaces are addressed by string identifiers, which must be handled carefully), and it is not a good separation between functional and non-functional concerns either. The Julia framework aims to ease the development of component membranes.

E.g., a component might want to provide interface introspection for all its Fractal interfaces. A Fractal interface is represented by an object which provides methods to invoke the Fractal interface's operations. The representation object for a Fractal interface that supports introspection must implement the Interface interface. Since a Java object cannot implement the same interface several times, a dedicated representation object has to be generated for each and every Fractal interface.

Julia also has to allow for extensions in the set of control capabilities. A new controller might want to intercept operations at arbitrary functional interfaces. E.g., a security manager might want to restrict access to some interfaces depending on the privileges of a logged-in user. Therefore, Julia has to implement a generic method of interception.
Figure 2.3: An abstract component and a possible implementation in Julia [BCL+ 04, Fig. 1].
Thus, a general membrane consists of several kinds of objects, as shown in Figure 2.3. There are dedicated representation objects for Fractal interfaces (including functional interfaces and controllers), which are shown in white. These representations delegate calls to implementation objects (shown in dark-grey). Since the different controllers may influence each other, their implementations intertwine. They may hook into interceptors (shown in light-grey). Julia provides a way to manually construct a membrane from these building blocks. But components that follow Fractal’s type system and provide component
introspection as well as interface introspection can be constructed automatically through a GenericFactory.
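As an illustration of such automatic construction, the following sketch creates a primitive component through the bootstrap component's TypeFactory and GenericFactory from the standard Fractal API. The interface name "service", its signature, and the content class are assumptions made for the example; "primitive" selects a standard Julia membrane.

import org.objectweb.fractal.api.Component;
import org.objectweb.fractal.api.factory.GenericFactory;
import org.objectweb.fractal.api.type.ComponentType;
import org.objectweb.fractal.api.type.InterfaceType;
import org.objectweb.fractal.api.type.TypeFactory;
import org.objectweb.fractal.util.Fractal;

public class ComponentCreation {
    public static Component createService() throws Exception {
        Component boot = Fractal.getBootstrapComponent();
        TypeFactory tf = Fractal.getTypeFactory(boot);
        GenericFactory gf = Fractal.getGenericFactory(boot);
        // a component type with a single server interface named "service"
        ComponentType type = tf.createFcType(new InterfaceType[] {
            tf.createFcItfType("service", "example.Service",
                TypeFactory.SERVER, TypeFactory.MANDATORY, TypeFactory.SINGLE)
        });
        // example.ServiceImpl is the content class implementing example.Service
        return gf.newFcInstance(type, "primitive", "example.ServiceImpl");
    }
}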
2.2 ProActive
ProActive [BBC+06] features a programming model and runtime environment for parallel and distributed computing. It makes it easy to partition the objects of a standard Java program into concurrent subsystems that may transparently migrate to remote nodes. Although ProActive itself targets Java, the underlying programming model has also been implemented in C++ and Eiffel. ProActive also implements the Fractal component model.

ProActive's core concept is the notion of active objects. In Java, every object may hold a reference to any other object in the JVM, and objects are always passed on by reference. A thread may call methods of every object. ProActive allows the partitioning of the object instances into subsystems that restrict this freedom of reference. In each subsystem one object is designated as active. This active object represents the subsystem. All other objects are called passive. Active objects may be referenced from everywhere (i.e., from active and passive objects), but passive objects may be referenced only within their subsystem (see Figure 2.4). When a passive object is passed as a parameter to an active object in a different subsystem, call by value is used (deep copy). So, there are no shared passives. Each subsystem has a dedicated thread that executes the subsystem's activities and calls from other subsystems.
Figure 2.4: A typical object graph with active objects [BBC+ 06, Fig. 1].
The benefit of the restriction of reference is that transparency of distribution requires overhead only at active objects. The ProActive implementation allows active objects to be created by encapsulating an instance with an automatically generated wrapper. However, passive objects are plain Java objects, except for one requirement: passives must be serializable.

Active objects reside in named ProActive nodes. A single JVM can host several ProActive nodes, and the JVMs that host nodes can be distributed over the network. An active object (together with all passives of its subsystem) can
transparently migrate to an arbitrary node. ProActive allows a descriptor-based distributed deployment of active objects, so the addresses of the involved machines do not have to be specified in the source code.
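The following sketch creates an active object on a remote node and exploits the asynchronous call semantics described next. The calls follow the ProActive API of that time; the Renderer and Image classes and the node URL are assumptions made for the example.

import org.objectweb.proactive.ProActive;
import org.objectweb.proactive.core.node.Node;
import org.objectweb.proactive.core.node.NodeFactory;

public class ActiveObjectDemo {
    public static void main(String[] args) throws Exception {
        Node node = NodeFactory.getNode("rmi://host/node1");  // a remote ProActive node
        // Renderer (an assumed class) becomes the active object of a new subsystem
        Renderer r = (Renderer) ProActive.newActive(
                Renderer.class.getName(), new Object[0], node);
        Image img = r.render(42);  // asynchronous call: returns immediately with a future
        // ... other computations proceed concurrently ...
        img.save("out.png");       // wait by necessity: blocks until the result has arrived
    }
}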
Figure 2.5: An asynchronous call to a remote active object results in a transparent future [BBC02, Fig. 1]

Remote calls to an active object are generally asynchronous. ProActive features transparent futures, so remote calls return immediately with an automatically generated proxy for the result (see Figure 2.5). This proxy is type compatible with the original result type. As soon as the actual result is computed, it is delivered to the proxy. Should the caller query the result before it is complete, the execution of the caller blocks (wait by necessity).

Sometimes this automatic conversion of synchronous to asynchronous calls is not possible. E.g., if the called method may throw an exception, this exception might reach the caller too late to handle it properly; the caller may already have left the respective try-catch block. In such a case, ProActive just uses synchronous calls. In addition, ProActive provides barriers for explicit synchronisation.

ProActive supports typed groups and provides two different representations for groups. On group creation the group type is fixed by specifying a common superclass A for all group members, which can be active as well as passive. The typed group representation is itself an instance of the given superclass A [sic!]. It allows access to the group by reifying any received method call on its members and returns the results as another typed group (see Figure 2.6). Therefore, only methods with a reifiable result type may be called on the typed group representation. Depending on the state of the group, the call parameters can be either broadcast or scattered to the group members.
Figure 2.6: An asynchronous call to a distributed typed group yields a transparent future to the result group [BBC02, Fig. 4]
The latter requires that the parameters themselves are typed groups, whose members can then be dispatched to the members of the callee. The state of a group can be modified through a management representation. Furthermore, groups may be created so that their members are aware of their group neighbourhood.

ProActive has been extended [BCM03] to implement the Fractal component model. Each ProActive component is based on at least one active object. These components share the abilities of active objects; they can transparently migrate from one node to another. ProActive components can contain several concurrent activities, which themselves can be deployed for distributed execution. To support this, components can specify virtual nodes that contain their activities. These virtual nodes can then be mapped to real nodes in various fashions (see Figure 2.7). The concurrent activities within a component are shown by an ellipse in the component's content for each active object.

ProActive's features make it possible to seamlessly convert a sequential application into a multithreaded one whose concurrent activities can easily be distributed. Therefore, ProActive satisfies our requirements for /R4/ distribution, /R5/ concurrency, and /R6/ instance migration.
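A hedged sketch of the group mechanism, following the API described in [BBC02] (exact signatures vary between ProActive versions; the Worker and Result classes are assumptions made for the example):

import org.objectweb.proactive.core.group.ProActiveGroup;
import org.objectweb.proactive.core.node.Node;

public class GroupDemo {
    public static Result broadcast(Node[] nodes) throws Exception {
        Object[][] params = { {}, {}, {} };  // one constructor parameter list per member
        // the typed group representation is itself usable as a Worker
        Worker workers = (Worker) ProActiveGroup.newGroup(
                Worker.class.getName(), params, nodes);
        // the call is reified and broadcast to all members;
        // the result is again a typed group (of futures)
        return workers.compute();
    }
}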
Figure 2.7: Different distributions for a set of ProActive components [BCM03, Fig. 2]
2.3 Extensions to the Fractal Component Model
As the Fractal component model is open, we can easily define extensions to meet our remaining requirements /R7/ state management, /R8/ profiling, and /R9/ arbitrary metadata.
2.3.1 State Management
Fractal's LifeCycleController already provides a method getFcState. However, this method is restricted to life-cycle state: it allows one to retrieve whether a component is started or stopped. Especially components that allow access to system resources may show additional observable state: several invocations with identical parameters at the same server interface may yield different results. Furthermore, components that model activities spontaneously trigger operations at their bound client interfaces.

We want to be able to replace an assembly of small components with a single optimised component (and vice versa). So, the state that was observable at the interfaces of multiple components must be transferred to a single component. To fulfil this requirement of state management (/R7/), we specify the interface of a StateController in Listing 2.1.

package de.uni-passau.fmi.cl.fractal.api.control;

import org.objectweb.fractal.api.control;

interface StateController {
    any getFcInterfaceState(string itfName)
        throws NoSuchInterfaceException, IllegalLifeCycleException;
    void setFcInterfaceState(string itfName, any itfState)
        throws NoSuchInterfaceException, IllegalLifeCycleException,
               IllegalStateException;
}
Listing 2.1: Specification of the StateController interface

For each functional interface the observable state can be extracted by calling getFcInterfaceState with the interface's name. Accordingly, the method setFcInterfaceState allows one to change a component's state for a given interface. The attempt to set an illegal state leads to an IllegalStateException. If a component supports extraction and/or modification of the observable state only during life-cycle state 'stopped', it must respond to calls that it cannot support with an IllegalLifeCycleException.
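To illustrate the intended use, the following sketch (assuming a Java mapping of the IDL from Listing 2.1 and the standard Fractal introspection API) transfers the observable state of all functional interfaces from a component to its replacement:

import org.objectweb.fractal.api.Component;
import org.objectweb.fractal.api.Interface;

public class StateTransfer {
    public static void transfer(Component oldComp, StateController oldState,
                                StateController newState) throws Exception {
        for (Object o : oldComp.getFcInterfaces()) {
            String name = ((Interface) o).getFcItfName();
            // skip controller interfaces; we only transfer functional state
            if (name.equals("component") || name.endsWith("-controller")) continue;
            newState.setFcInterfaceState(name, oldState.getFcInterfaceState(name));
        }
    }
}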
2.3.2 Profiling
The most important requisite for generic optimisation is a cost model. Unfortunately, few components provide one. A profiler can ease the generation of a basic cost model when used with representative test data. It can also help to fine-tune an existing cost model to the current context. So, in Listing 2.2 we define the interface of a ProfilingController that serves our need for profiling (/R8/).

package de.uni-passau.fmi.cl.fractal.api.control;

import org.objectweb.fractal.api.control;

interface ProfilingController {
    void startFcProfiling();
    void stopFcProfiling();
    void clearFcProfilingData();
    string getFcProfilingState();
    Invocation[] getFcInvocations(string itfName)
        throws NoSuchInterfaceException;
    double getCurrentResourceUsage(string rscName);
    double getPredictedResourceDemand(string rscName);
}

interface Invocation {
    any getTimeStamp();
    int getExecutionTime();
    bool hadRoleServer();
    any getCaller();
    any getCallee();
    int getRxTraffic();
    int getTxTraffic();
}
Listing 2.2: Specification of the ProfilingController interface

A ProfilingController comes with its own life-cycle, which can be controlled via the methods startFcProfiling and stopFcProfiling and queried via getFcProfilingState. The initial profiling state of a newly created component with this interface is up to the implementation of the controller. The method clearFcProfilingData allows one to reset the previously collected data.
An implementation should provide getFcInvocations for at least all server interfaces of the component. For each invocation the following information is provided: its point in time (getTimeStamp), its duration (getExecutionTime), its role (hadRoleServer), its communication partner (getCaller or getCallee), and the size of received and transmitted data (getRxTraffic and getTxTraffic).
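For example, a generic optimiser could derive a simple cost estimate from this data. The following sketch (assuming a Java mapping of the IDL from Listing 2.2) computes the mean execution time of the invocations served at one interface:

public class ProfileAggregator {
    // Mean execution time of all invocations served at the given interface.
    public static double meanServerTime(ProfilingController pc, String itfName)
            throws Exception {
        long total = 0;
        int count = 0;
        for (Invocation inv : pc.getFcInvocations(itfName)) {
            if (inv.hadRoleServer()) {       // consider only the server role
                total += inv.getExecutionTime();
                count++;
            }
        }
        return count == 0 ? 0.0 : (double) total / count;
    }
}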
2.3.3 Metadata
The runtime environment might retrieve the cost model for a component from its own arbitrary datastore. But there should be a standard interface to complement this information from other sources. An independent analyser might want to attach optimisation hints to a component. A component itself might want to provide an alternative cost model depending on its functional state. Therefore, we define the MetadataController interface in Listing 2.3, which allows the attachment of arbitrary metadata to a component (/R9/).

package de.uni-passau.fmi.cl.fractal.api.control;

import org.objectweb.fractal.api.control;

interface MetadataController {
    any getMetadata(string mtdName);
    void setMetadata(string mtdName, any metadata);
}
Listing 2.3: Specification of the MetadataController interface
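A minimal usage sketch (assuming a Java mapping of the IDL from Listing 2.3; the key "cost-model" and the CostModel type are illustrative assumptions):

public class CostModelMetadata {
    public static void attach(MetadataController mc, CostModel model) {
        mc.setMetadata("cost-model", model);   // overwrites any previous model
    }

    public static CostModel retrieve(MetadataController mc) {
        return (CostModel) mc.getMetadata("cost-model");
    }
}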
Chapter 3

Implementation Strategies for Self-Adaptivity

By comparing several frameworks for self-adaptive solutions, we identified the following issues as the main design decisions for self-adaptivity:

What are the goals of the adaptations? In Grid computing, these are mainly performance, quality of service, and fault-tolerance. Performance optimisation includes the exploitation of appearing resources. Optimising quality of service differs from pure performance optimisation in that it allows trade-offs between (functional and non-functional) requirements. Fault-tolerant systems handle sudden resource disappearance (in contrast to resource reclaiming) gracefully. In ubiquitous computing, context-aware systems shall compensate for continuously changing requirements.

Which kind of adaptivity is supported? Possible kinds of adaptations are the adjustment of configuration attributes (e.g., cache size), changing the distribution of the application (to exploit available resources), and restructuring. Restructuring an application includes changing implementations, e.g., completely replacing a cache component to compensate for a change from sequential access to random access. Obviously, this question is related to the first question of adaptation goals: a fault-tolerant system must not only be able to adapt the structure and the distribution of the application, it must also restore consistency, e.g., by reverting the application's state to a known checkpoint.

When should adaptations occur (activation)? Adaptations can be part of a continuous or scheduled optimisation process. Adaptations can also be the response to events triggered by changes in sensor values. Sensors may monitor resource availability, application performance and structure, as well as usage patterns, and can themselves be combined into higher-level logical sensors. Finally, the user may explicitly request the application to be restructured (a sensor for user demands and perceived performance).

How to decide which adaptations improve the situation (selection)? In most systems, anticipated events (or groups of events) lead to explicitly specified
adaptations (possibly parameterised with the specific event and/or event properties). In these cases the expressiveness of event definition is of utmost importance to allow for well-adjusted adaptations. Adaptations may be guarded by conditions that include state about previous events. In hierarchical reconfiguration the semantics of an adaptation command depend on the structure of the involved components. The reconfiguration policy may be static or may itself evolve during the application's lifetime.

How to implement adaptivity? Adaptivity may be implemented by the application or by the involved components themselves. But the implementation is significantly eased if a framework is used that supports adaptivity. The implementation does not only have to perform the reconfiguration but also has to solve technical problems. E.g., the execution of an adaptation must preserve consistency. It may be necessary that an adaptation is delayed until the application reaches a safe state (which is usually done by synchronously calling an appropriate life-cycle operation).

Layaida and Hagimont [LH05, Sec. 2] distinguish five main approaches to self-adaptivity, which differ primarily in the last three problems of activation, selection, and implementation of adaptivity:

Static, hard-coded reconfiguration policies: The simplest form of self-adaptivity is provided by applications that are not based on any framework or library but directly and explicitly implement anticipated adaptations, e.g., a network application that features congestion control algorithms.

Component-based frameworks with reconfiguration capabilities: A component framework may ease the implementation of adaptivity. The developer has complete freedom in (i.e., responsibility for) the design of timing and decision.

Component-embedded reconfiguration policies: Functional components may recognize an overstress or a violation of quality of service and notify their neighbours about this condition. The neighbours can ignore notifications, redistribute them, or adapt accordingly. This especially suits pipelines of components. "In Microsoft DirectShow for example, processing components (called filters) exchange in-stream QoS messages travelling in the opposite direction of the data flow." [l.c.]

Separate [hard-coded] reconfiguration managers: The adaptations are controlled by a separate entity. For the parts of the application that need to be adapted, knowledge is collected at one point. Preferably, the manager has perfect knowledge about the structure of the whole application. A hard-coded reconfiguration manager needs to be updated when new components allow new configurations.
Scripting languages for reconfiguration: Similar to a reconfiguration manager, but the adaptations are specified in a domain-specific language (DSL) tailored to reconfiguration. Some approaches do not cover the description of application structure (which components generate events and which need to be adapted). In these cases, the language is only used to easily reconfigure an application-specific reconfiguration manager.

Layaida and Hagimont regard [hard-coded] reconfiguration managers as difficult to implement and to maintain. They do not consider the possibility of a generic reconfiguration manager.

Aldinucci et al. [AAB+05] present a general model of the tasks that need to be performed to adapt a distributed application (see Figure 3.1). They address the last three of our aforementioned design problems, namely activation (which they call trigger), selection (policy), and implementation (commit). They stress that their model is intentionally under-specified. The model's general applicability should not be restricted by unnecessary constraints like a strict time ordering.
Figure 3.1: Abstract schema of an adaptation manager [AAB+ 05, Fig. 1]
decide: The decision phase provides the commit phase with an abstract adaptation strategy. It operates on the application structure and behaviour. It is composed of:

  trigger: The trigger phase determines when an adaptation should occur. "It is essentially an interface towards the external world, assessing the need to perform corrective actions." [l.c.]

  policy: The policy phase selects an appropriate strategy that meets the current situation.

commit: The commit phase maps the adaptation strategy to an implementation that is based on the available run-time support. It is split into different items:
  plan: The planning phase comes up with a list of steps that need to be performed to implement a given adaptation strategy.

  execute: The execute phase performs the steps of a plan. It determines a legal timing for the steps and relies on the mechanisms that are provided by the run-time support.

Below, we discuss three noticeably different approaches to these design problems.
3.1 Dynaco/AFPAC
Dynaco [ABP04] is a framework that eases the creation of self-adaptive components. AFPAC [BAP06] extends Dynaco for self-adaptive parallel components. Dynaco and AFPAC focus on intra-component adaptation. Although an adaptation at one component can trigger adaptations at its neighbours through renegotiation of service contracts, there are no methods for restructuring at the application level (e.g., insertion of components).

Buisson, André, and Pazat take components with long-running operations into consideration. Such components may need to be adapted during an operation (intra-operation). In contrast to that, components with short-running operations can be stopped gracefully (complete their current operation) when an adaptation is necessary. So, Dynaco and AFPAC introduce adaptation points, at which a running component may be safely reconfigured.

The authors also discuss adaptations in the past. If adaptation points were checkpoints, they could be used to adapt in the past. Of course, this comes with the overhead of reverting the component's state to the checkpoint. Furthermore, if the component has communicated with the rest of the application, there have to be mechanisms to revert the whole application state. On the other hand, adaptation in the past allows an adaptation strategy to be implemented immediately. So, it is easier to make predictions about the impact of an adaptation in the past than of an adaptation in the future. In general, it is impossible to predict whether the next adaptation point will be reached at all. Nevertheless, Dynaco performs adaptation in the future. Usually the very next future adaptation point is chosen. In AFPAC, with components that feature SPMD (single program, multiple data) parallelism it may not be necessary to synchronise the concurrent threads (or processes) at the adaptation point. Instead, each thread may adapt independently as soon as it reaches the point [BAP05].

Figure 3.2 shows the Dynaco/AFPAC model of a self-adaptive component. Monitors provide information to a decider. The decider relies on component-specific information from a policy. It may demand a strategy to be performed.
Figure 3.2: Model of a parallel self-adaptable component [BAP06, Fig. 2.1]
A planner uses component-specific information from a guide to translate an adaptation strategy into a plan of necessary invocations of actions. The executor interprets the plan. The coordinator is part of the executor and chooses the adaptation points at which the actions are invoked. The actions modify the functional part of the component: the service.

A possible action is to change the degree of parallelism of a component. This way, a single component can exploit resources as they become available. The authors also stress that reduction of intra-component parallelism is useful for resource reclaiming, e.g., announced maintenance.
3.2 PLASMA
PLASMA (platform for self-adaptive multimedia applications) [LH05] is a framework based on Fractal components that allows the construction of self-adaptive multimedia applications. In PLASMA an application is composed of functional media components and monitoring and reconfiguration components.

Media components are grouped on three hierarchical levels. On the lowest level, media primitives (MP) provide processing entities. Higher-level tasks are implemented in media composites (MC), which are composed from media primitives. Primitives and composites can be connected via typed stream interfaces to form streaming pipelines. Finally, media sessions (MS) encapsulate media components and represent an application. This strict structure is reflected in a simple and readable architecture description language (ADL).

At each functional level monitoring and reconfiguration components can be inserted. There are three types of such adaptation components. Probes measure quality of service values or gather resource states. Sensors monitor probes and trigger events. Events can further be processed (e.g., by event-composers
or event-filters). Actuators listen for events and perform adaptations (adjusting and restructuring). Since probes, sensors, and actuators are components, they can themselves be subject to adaptation. The reconfiguration policy is dynamically reconfigurable.

PLASMA features hierarchical reconfiguration. An adaptation command is issued at a (possibly composite) component. The component may then decide the best implementation for the adaptation command based on its content. This includes delegation of the adaptation to its children.
Figure 3.3: Structure of a video streaming server [LH05, Fig. 6]
Figure 3.3 shows the architecture of an example application (a video streaming server). A single media session encloses all other components. It contains a pipeline of three media components: a capturing component, an encoding component, and a network component. A QoS probe monitors the network component. A sensor monitors this probe and may activate an actuator that adapts the encoding component. The encoding component delegates the execution of the adaptation to its encoding primitives.
3.3 SAFRAN
SAFRAN (self-adapting Fractal components) [DL05a, Dav05] is a framework based on Fractal components that provides domain-specific languages for the development of self-adaptivity. It mainly consists of three parts.

WildCAT: WildCAT [DL05b] is a framework for context-awareness. It offers a model for the implementation of providers of context information. WildCAT allows access to conforming providers through a data retrieval interface (pull mode) and through notifications (push mode). The notification interface features basic events (e.g., attribute changed). It also supports the registration of synthetic expressions, which can be monitored for change (expression changed), and of custom predicates, which are watched for transitions from false to true (condition occurred). WildCAT is used in SAFRAN to monitor changes in the execution environment.

Adaptation Controller: The adaptation controller handles selection. It is a Fractal controller and allows the attachment of adaptation policies to Fractal components. A policy determines which adaptations should occur on specific events. It consists of one or more event-condition-action (ECA) rules. Events can originate from WildCAT (exogenous events) or from the membrane of an arbitrary component (endogenous events). The component to which a policy is attached is the starting point for a possible adaptation, but adaptations are not restricted to a single component. They can in fact reconfigure the whole application.

FScript: FScript [DL06] is a language that allows the definition of the necessary steps to implement an adaptation. FScript features reconfiguration primitives for the modification of containment, bindings, life-cycle, and attributes. It also allows the creation of new components. FScript is a language with high expressiveness, much of which can be attributed to its FPath notation. FPath allows components to be addressed based on their position in the containment hierarchy, on their role in collaboration, and on their properties.

FPath was inspired by XPath but has nothing to do with XML. A Fractal architecture is seen as a directed graph, where different kinds of nodes represent components, interfaces, attributes, and methods. The relations between these architectural elements are represented by different types of edges (see Figure 3.4). An FPath expression allows navigation on such a graph. It consists of steps, which are separated by a forward slash (/). A single step consists of up to three elements: axis::test[predicate] (the predicate is optional).
Figure 3.4: A simple Fractal architecture and its representation as a graph
Each step takes a starting set of nodes from which it follows a certain type of edges (determined by the axis). The reached neighbours are in the result set if they pass the name test (test) and fulfil any given predicate.

action bindclients(cmp) = {
    // selects all unbound client interfaces of direct children of $cmp
    clients := $cmp/child::*/interface::*[not(bound(.))];
    foreach client in clients do {
        // selects the interface named a5 of $cmp
        server := $cmp/interface::a5;
        // creates a new binding
        bind($client, $server);
    }
}

policy magicbinding = {
    rule {
        when event1
        if (condition1)
        do {
            bindclients($target);
        }
    }
}
Listing 3.1: A simple FScript action and policy

Listing 3.1 shows a simple FScript action and policy. The action bindclients takes a component cmp and searches for unbound client interfaces in its direct children. These interfaces are then bound to an internal server interface of cmp named a5. The policy magicbinding performs this action on reception of the event event1 and under the condition condition1. In a policy, $target is set to the component that the policy is attached to.
Figure 3.5: Result of an FScript action
Figure 3.5 shows the result of the action bindclients when performed on the architecture from Figure 3.4. A new binding (shown in bold) is created between the interfaces c3 and a5.
3.4 Comparison
The three discussed frameworks have different goals and feature different approaches to the challenge of self-adaptivity. Table 3.1 summarises their main characteristics.

Dynaco and PLASMA both deal with long-running operations. Dynaco therefore introduces adaptation points, which contribute significantly to the complexity of the framework's implementation. This results in a fine-grained adaptivity even for large components. In PLASMA there is no such concept. Its model of hierarchical adaptation delegates the implementation of adaptivity to the functional components. But we expect no difficulties from this lack of explicit adaptation points. In PLASMA's domain (streaming multimedia) implicit adaptation points can easily be found (e.g., after each processed frame or packet). SAFRAN does not provide support for adaptivity during a running operation.

Dynaco/AFPAC only supports intra-component adaptivity. It allows the adjustment of a component's behaviour and changes in the degree of parallelism of a single parallel component. This includes limited redistribution capabilities: a component can spawn processes on nodes as they become available. PLASMA and SAFRAN have no explicit support for distributed execution and therefore also lack redistribution capabilities. But SAFRAN provides explicit restructuring facilities. New components can be created. Containment and collaboration can be manipulated. In PLASMA restructuring can easily be implemented directly in the functional components or in specialised actuators.
Only PLASMA allows the reconfiguration mechanisms themselves to be reconfigured. Probes, sensors, and actuators are first-class citizens. They are implemented as Fractal components (just like functional concerns) and can therefore be subject to the normal adaptation process.

Both Dynaco and PLASMA require that the actual adaptation steps are implemented in the programming language of the functional components. In Dynaco, the developer has to provide actions that can be woven with the functional code. In PLASMA, a functional component includes code for hierarchical adaptation, either directly or as part of its membrane. On the other hand, SAFRAN provides domain-specific languages for the whole adaptation process.

All three frameworks require that adaptations are explicitly provided. We did not encounter a generic solution for adaptation selection.
                          Dynaco/AFPAC                      PLASMA                            SAFRAN

domain                    Grid computing                    streaming multimedia              general purpose
                                                            applications
goals                     performance                       quality of service,               general purpose,
                                                            performance                       performance
supported adaptivity      adjusting, intra-component        adjusting, restructuring          adjusting, restructuring
                          redistribution
decide:
  activation / trigger    monitors                          sensor components monitor         WildCAT
                                                            probe components
  selection / policy      component-specific policies       actuator components, dynamic      DSL (policy), static
                          (implementation of an
                          interface), static
implementation / commit:
  plan                    component-specific guides         hierarchical reconfiguration      DSL (FScript)
  execute                 component-specific actions        based on Fractal                  based on Fractal

Table 3.1: Comparison of the self-adaptivity frameworks Dynaco, SAFRAN, and PLASMA
Chapter 4

A Generic Approach to Self-Adaptivity

In this chapter we propose a generic approach to the problem of selecting an appropriate adaptation. We describe algorithms for two types of adaptations: redistribution and restructuring. To avoid confusion between nodes in our graph algorithms and nodes on which a distributed application executes, we call the latter processors from now on. One must not assume a one-to-one mapping from these virtual processors to physical computers or virtual machines.
4.1 Motivation

4.1.1 Redistribution
Consider a simple Fractal architecture that consists of four components in a pipeline (see Figure 4.1 a). The components run concurrently. We have two processors available, and we want to optimise performance by finding the best possible distribution on these two processors. Obviously, there are 2^4 = 16 possibilities for this. The two processors have identical performance, so one half of the possible distributions is symmetrical to the other half. But each of the remaining 8 distributions is optimal for a particular behaviour of the involved components.

In our example we focus on concurrency (execution behaviour) and communication costs (communication behaviour). We assume that in our example application from Figure 4.1 a computation consists of two phases in which different components are active concurrently. A black dot indicates execution during phase 1 only, while a white dot stands for execution during phase 2. A thin line between interfaces represents a binding with low traffic and a thick line shows a high-traffic binding. Figures 4.1 b – d give examples of different execution and communication behaviour.

We want to minimise conflicts in CPU access by concurrent components.
Figure 4.1: Optimal distributions for different execution and communication behaviour in a simple Fractal architecture
We therefore distribute them on different processors (see Figure 4.1 b, c). But a high-traffic binding between two concurrent components can require that they stay on the same processor (see Figure 4.1 d). Of course, conflicts are not limited to CPU access. We also want to minimise conflicts for other types of resources, e.g., main memory or disk space.
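For instances as small as this one, an optimal distribution can still be found by exhaustive search. The following sketch is our illustration of that baseline; the cost function abstracting the execution and communication behaviour is an assumed parameter, and the enumeration makes the exponential blow-up explicit.

public class ExhaustiveDistribution {
    public interface CostFunction {
        double of(int[] ds);  // cost of assigning component i to processor ds[i]
    }

    // Returns the assignment ds : component index -> processor index with
    // minimal cost. Runs in O(m^n) and is thus feasible only for very small n.
    public static int[] best(int n, int m, CostFunction cost) {
        int[] ds = new int[n];
        int[] bestDs = new int[n];
        double bestCost = Double.POSITIVE_INFINITY;
        long total = (long) Math.pow(m, n);
        for (long code = 0; code < total; code++) {
            long c = code;
            for (int i = 0; i < n; i++) { ds[i] = (int) (c % m); c /= m; }
            double k = cost.of(ds);
            if (k < bestCost) { bestCost = k; bestDs = ds.clone(); }
        }
        return bestDs;
    }
}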
4.1.2 Restructuring
While the fine-grained structure of an application composed of many small components provides flexibility in redistribution, it also introduces overhead. Component membranes are layers of redirection and cause delays in operation invocations. So, we want to replace small adjacent components with larger components that exploit locality.
Moreover, if two distributed components A1 and A2 depend on the same helper component H, this helper can only be co-located with either A1 or A2. If H is stateless, we can reduce remote invocations by splitting it into two distributed instances H1 and H2. Maybe H has no observable functional state but does cache results to improve performance. Should A1 and A2 become co-located in the future, we would want to merge H1 and H2 again.
Figure 4.2: Restructuring of a) two composite components into b) a single composite component
In Figure 4.2, we examine one possible restructuring. A pipeline of two composite components (see Figure 4.2 a) can be combined into a single composite component (see Figure 4.2 b), and vice versa. The first form allows efficient distributed execution of A1 and A2. In the second form, A1 and A2 need to be co-located because of their high-traffic bindings to H. But we eliminate two membranes in the communication between A1 and A2. Furthermore, we save memory because of the reduced number of components and possibly benefit from an improved caching behaviour at H.
4.2 Redistribution

4.2.1 Notation
We use the following notation to describe the optimisation problem of redistribution. Let $u_1, \ldots, u_n$ be the unit vectors in $\mathbb{R}^n$:
\[ u_1 = (1, 0, 0, \ldots, 0), \quad u_2 = (0, 1, 0, \ldots, 0), \quad \ldots, \quad u_n = (0, 0, \ldots, 0, 1) \]
\[ \mathbf{1} = \sum_{i=1}^{n} u_i = (1, 1, 1, \ldots, 1), \qquad \mathbf{0} = (0, 0, 0, \ldots, 0) \]
Let $x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ and $y = (y_1, \ldots, y_n) \in \mathbb{R}^n$. The canonical scalar product is written as $\langle\,\cdot\,,\,\cdot\,\rangle : (\mathbb{R}^n \times \mathbb{R}^n) \to \mathbb{R}$:
\[ \langle (x_1, \ldots, x_n), (y_1, \ldots, y_n) \rangle = x_1 y_1 + \ldots + x_n y_n \]
The Euclidean norm is written as $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$:
\[ \|x\| = \sqrt{\langle x, x \rangle} \]
$|x|_c$ stands for the absolute value by components:
\[ |x|_c = |(x_1, \ldots, x_n)|_c := (|x_1|, \ldots, |x_n|) \]
Correspondingly, $\mathrm{diff}_c(x, y)$ is the difference by components:
\[ \mathrm{diff}_c(x, y) := |x - y|_c \]
$\max_c(x, y)$ is the maximum by components:
\[ \max\nolimits_c((x_1, \ldots, x_n), (y_1, \ldots, y_n)) := (\max(x_1, y_1), \ldots, \max(x_n, y_n)) \]
Let $n$ be the number of involved components and $\mathcal{C} = \{C_1, \ldots, C_n\}$ be the set of components. Also, let $m$ be the number of available processors and $\mathcal{P} = \{P_1, \ldots, P_m\}$ be the set of processors. The distribution of components to processors can be described by a function $ds : \mathcal{C} \to \mathcal{P}$. The inverse image function $ds^{-1}$ subdivides $\mathcal{C}$ into up to $m$ partitions:
\[ \mathcal{C}_{ds} := \{ ds^{-1}(P) \mid P \in \mathcal{P} \} \]
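Translated into code, the component-wise operations are straightforward. The following is a minimal sketch in Java; the class and method names are ours and purely illustrative, not taken from any implementation in this thesis.

// Sketch of the vector notation; all names are illustrative.
final class Vec {
    // <x, y> : canonical scalar product
    static double dot(double[] x, double[] y) {
        double s = 0;
        for (int i = 0; i < x.length; i++) s += x[i] * y[i];
        return s;
    }
    // ||x|| : Euclidean norm
    static double norm(double[] x) { return Math.sqrt(dot(x, x)); }
    // |x|_c : absolute value by components
    static double[] absC(double[] x) {
        double[] r = new double[x.length];
        for (int i = 0; i < x.length; i++) r[i] = Math.abs(x[i]);
        return r;
    }
    // diff_c(x, y) = |x - y|_c : difference by components
    static double[] diffC(double[] x, double[] y) {
        double[] r = new double[x.length];
        for (int i = 0; i < x.length; i++) r[i] = Math.abs(x[i] - y[i]);
        return r;
    }
    // max_c(x, y) : maximum by components
    static double[] maxC(double[] x, double[] y) {
        double[] r = new double[x.length];
        for (int i = 0; i < x.length; i++) r[i] = Math.max(x[i], y[i]);
        return r;
    }
}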
4.2.2 Resources
There are two kinds of resources. Some resources are available on all processors, e.g., CPU cycles and main memory. Such a general resource is readily available in a certain amount. If a component's request for a resource stays below this amount, the component can be served immediately. If a request exceeds this amount, it will still be fulfilled, but performance will degrade for all components that use this resource on the given processor. E.g., a request for main memory can always be fulfilled by using virtual memory, and we do not consider swap space to be a limiting factor. In our model a resource conflict causes a performance penalty but never leads to a deadlock. So, a processor's ability to host a specific component is not constrained by the component's demand for general resources.

The second kind consists of resources that are unique to a specific processor, e.g., a physical sensor or the terminal that is operated by the end-user. A component's dependency on such a unique resource ties the component permanently to a specific processor. Such a fixed mapping can be described with a function $fm : \mathcal{C} \to \mathcal{P} \mathbin{\dot\cup} \{\epsilon\}$, where $\epsilon$ is a marker for no dependency. A distribution that respects such a mapping must fulfil the following constraint:
\[ \forall\, C \in \mathcal{C} \setminus fm^{-1}(\epsilon) : ds(C) = fm(C) \]

Time-dependent model

Let $k$ be the number of different general resources and $\mathcal{R} = \{R_1, \ldots, R_k\}$ be the set of general resource types. Processors provide the different types of resources in varying amounts:
\[ rp : \mathcal{P} \to (\mathbb{R} \to \mathbb{R}^k), \qquad rp_P : \mathbb{R} \to \mathbb{R}^k,\ t \mapsto rp(P)(t), \qquad rp_{P,i} : \mathbb{R} \to \mathbb{R},\ t \mapsto \langle rp_P(t), u_i \rangle \]
$rp_P(t)$ returns a vector that consists of the amounts of the $k$ resource types that are provided by the processor $P$ at time $t$. $rp_{P,i}(t)$ selects the $i$th element of this vector. The development of the resource demand of a component $C$ depends on the distribution. It is specified correspondingly:
\[ rd_{ds} : \mathcal{C} \to (\mathbb{R} \to \mathbb{R}^k), \qquad rd_{ds,C} : \mathbb{R} \to \mathbb{R}^k,\ t \mapsto rd_{ds}(C)(t), \qquad rd_{ds,C,i} : \mathbb{R} \to \mathbb{R},\ t \mapsto \langle rd_{ds,C}(t), u_i \rangle \]
A resource conflict occurs at a processor $P$ for a given distribution, when
\[ \exists\, i, t : \sum_{C \in ds^{-1}(P)} rd_{ds,C,i}(t) > rp_{P,i}(t) \qquad \text{for } 1 \le i \le k \]
In our model, the severity of a resource conflict depends linearly on its duration and on the magnitude of overload. We use a vector of resource weights ($rw \in \mathbb{R}^k$) to compare the different resource types. This allows us to calculate the total severity of resource conflicts ($rc \in \mathbb{R}$) in a timeframe (from $t_1$ to $t_2$) for a given distribution (also see Figure 4.3):
\[ rc_P := \left\langle rw,\ \int_{t_1}^{t_2} \max\nolimits_c\!\left(\mathbf{0},\ \sum_{C \in ds^{-1}(P)} rd_{ds,C}(t) - rp_P(t)\right) dt \right\rangle \]
\[ rc := \sum_{P \in \mathcal{P}} rc_P \]
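As an illustration, $rc_P$ can be approximated by sampling the demand and provision curves at discrete time steps. The following is a minimal sketch under that assumption; the data layout and all names are ours.

// Sketch: approximate rc_P from sampled profiling data.
// rd[c][t][i]: demand of component c for resource i at sample t;
// rp[t][i]: provision of processor P at sample t; rw: resource weights;
// dt: length of one sampling interval. All names are illustrative.
final class ResourceConflicts {
    static double rcP(double[][][] rd, double[][] rp, double[] rw, double dt) {
        double rc = 0;
        for (int t = 0; t < rp.length; t++) {
            for (int i = 0; i < rw.length; i++) {
                double demand = 0;
                for (double[][] component : rd) demand += component[t][i];
                // max_c(0, total demand - provision), weighted and integrated
                rc += rw[i] * Math.max(0, demand - rp[t][i]) * dt;
            }
        }
        return rc;
    }
}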
Figure 4.3: The components A, B, and C are allocated to the same processor P. The development of resource demand leads to a resource conflict (stacked view).

rc depends on exact timing information. Unfortunately, the timing information for one distribution is not applicable to another distribution. As the following example shows, even small changes in the execution behaviour of the application can result in significant changes in rc. Figure 4.4 a shows the execution behaviour of two independent components A and B in a sequence diagram. A and B are running on different processors. The timing information could have been generated by profiling an actual execution of the application. Only in one case are the two components active at the same time. One would expect that they interleave well when allocated to the same processor. But a different distribution could cause a single communication at component initialisation to take longer. In Figure 4.4 b, A and B are co-located, but the execution of component A is delayed. This triggers a number of resource conflicts, which lead to performance degradation compared to the initial distribution.

Figure 4.4: a), b) Sequence diagram of the execution behaviours of two components for different distributions. c) Impact of a safety margin on resource conflicts.

A possible alleviation of this problem is the introduction of safety margins (see Figure 4.4 c). By enlarging the timeframe of each resource demand, potential resource conflicts due to delays after component migration become visible. But this method can only compensate for small offsets and can therefore only handle small refinements of a given distribution. Large-scale redistributions can lead to completely different timing information. Furthermore, it is difficult to provide accurate timing information about resource usage. It depends on hardware characteristics that may only be available at deployment time, and it is complicated to integrate such characteristics into a general cost model of the application. The only safe way to determine the timing of an arbitrary application is to measure its execution – provided that the profiler used does not introduce bias.

Constant Demand Predictions

We consider a model for resource conflicts that is more robust and less expensive than complete information about each single request. Instead of trying to optimise an application according to its behaviour in an arbitrary timeframe (from $t_1$ to $t_2$), we restrict ourselves to adapting only for phases which show a continuous, characteristic behaviour. For such phases with low fluctuation the resource demand can be aggregated in a single vector. Overall, we consider the resource demand to be constant in sections. If we adapt at every phase change, we need to handle constant resource demand only. The demand can be measured, or a component can be queried to predict its future demand.
\[ prp : \mathcal{P} \to \mathbb{R}^k, \qquad prp_P = prp(P) \in \mathbb{R}^k, \qquad prp_{P,i} = \langle prp_P, u_i \rangle \in \mathbb{R} \]
\[ prd : \mathcal{C} \to \mathbb{R}^k, \qquad prd_C = prd(C) \in \mathbb{R}^k, \qquad prd_{C,i} = \langle prd_C, u_i \rangle \in \mathbb{R} \]
$prp_{P,i}$ is the constant amount of the $i$th resource that processor $P$ will provide. $prd_{C,i}$ predicts the amount of the $i$th resource that component $C$ will demand. The resource demand is no longer dependent on the distribution. A resource conflict occurs at a processor $P$, when
\[ \exists\, i : \sum_{C \in ds^{-1}(P)} prd_{C,i} > prp_{P,i} \qquad \text{for } 1 \le i \le k \]
Again, the severity of a resource conflict depends linearly on the magnitude of overload:
\[ prc_P := \left\langle rw,\ \max\nolimits_c\!\left(\mathbf{0},\ \sum_{C \in ds^{-1}(P)} prd_C - prp_P\right) \right\rangle \]
\[ prc := \sum_{P \in \mathcal{P}} prc_P \]
If the processors are homogeneous ($\forall\, i, j \in \{1, \ldots, m\} : prp_{P_i} = prp_{P_j}$), we define
\[ \overline{prp} := prp_{P_1} = \ldots = prp_{P_m} \]
In this case, we want to distribute the resource demand equally on all available processors, i.e., we want to minimise
\[ prdd := \left\langle rw,\ \sum_{P \in \mathcal{P}} \mathrm{diff}_c\!\left(\overline{prp},\ \sum_{C \in ds^{-1}(P)} prd_C\right) \right\rangle \]
One can regard this as a weighted, $k$-dimensional PARTITION problem for $m$ partitions. The classic PARTITION problem is defined as follows. Let $A$ be a multi-set of positive integers. Is there a subset $A_1 \subset A$ with
\[ \sum_{a_i \in A_1} a_i = \sum_{a_j \in A \setminus A_1} a_j \;? \]
PARTITION belongs to the 21 problems that Richard Karp proved to be NP-complete [Kar72]. PARTITION can trivially be reduced to our optimisation problem. So, minimising prdd is at least NP-hard.
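For concreteness, the reduction can be sketched as follows; this encoding is our own and is not spelled out in this form in the text. Given a PARTITION instance $A = \{a_1, \ldots, a_n\}$, choose
\[ k := 1, \qquad m := 2, \qquad rw := (1), \qquad prd_{C_i} := (a_i), \qquad \overline{prp} := \Bigl( \tfrac{1}{2} \sum_{a_i \in A} a_i \Bigr) \]
Then, writing $S = \sum_i a_i$ and $D_1$ for the demand placed on $P_1$, we get $prdd = |\tfrac{S}{2} - D_1| + |\tfrac{S}{2} - (S - D_1)| = 2\,|\tfrac{S}{2} - D_1|$, so a distribution with $prdd = 0$ exists iff the PARTITION instance is solvable.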
4.2.3 Communication
Remote invocations from a component on $P_1$ to a component on $P_2$ cause delays that are determined by the bandwidth and the latency of the network between $P_1$ and $P_2$, as well as by the transferred data volume and the number of communication startups.

Time-dependent model

If the communicating processors are not in the same network segment, the performance is determined by the available bandwidth and the latency of every intermediate segment on the path. We only consider network topologies that can be represented as trees, so all processors are connected by exactly one path. E.g., in Figure 4.5, a communication between $P_3$ and $P_7$ occupies bandwidth in LAN A, LAN C, and the WAN. We can model the bandwidth as a shared resource.

Figure 4.5: Several network segments with different bandwidth

Let $s$ be the total number of network segments and $\mathcal{S} = \{S_1, \ldots, S_s\}$ be the set of segments. The function
\[ ns : (\mathcal{P} \times \mathcal{P}) \to \mathcal{P}(\mathcal{S}) \]
returns the set of intermediate segments between two processors. In the example of Figure 4.5: $ns(P_3, P_7) = \{\text{LAN A}, \text{WAN}, \text{LAN C}\}$. In Grid computing, inbound and outbound bandwidth of the involved networks are usually symmetric. Therefore, we can define
\[ bdw : \mathcal{S} \to \mathbb{N} \cup \{\infty\} \]
to be the network bandwidth (in bytes/second) that a segment provides. We define the local bandwidth to be infinite: $bdw(\emptyset) := \infty$. We assume QoS mechanisms to be in place, so that the latency is independent of network utilisation. The function
\[ lat : (\mathcal{P} \times \mathcal{P}) \to \mathbb{R} \]
returns the latency in seconds between two processors. The local latency is zero: $lat(P, P) := 0$.

Every invocation between two components can be expressed by a tuple:
\[ I = (C, C', t, v) \in (\mathcal{C} \times \mathcal{C} \times \mathbb{R} \times \mathbb{N}) \]
$C$ is the calling component, $C'$ is the callee, $t$ is the time of the invocation, and $v$ is the transferred data volume. Let $\mathcal{I}$ be the set of all invocations. We examine isolated data transfers, i.e., we assume that the complete bandwidth of the network is available to a single invocation. The following function computes the duration of an isolated data transfer for a given distribution, without latency:
\[ dur_{ds} : \mathcal{I} \to \mathbb{R}, \qquad dur_{ds} : (C, C', t, v) \mapsto \frac{v}{\min\{ bdw(S) \mid S \in ns(ds(C), ds(C')) \}} \]
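Evaluating $dur_{ds}$ amounts to dividing the volume by the bottleneck bandwidth along the unique path. A minimal sketch, assuming $ns$ is given as a list of segment bandwidths (all names are ours):

import java.util.List;

// Sketch: duration of an isolated transfer of 'volume' bytes.
// 'pathBandwidths' holds bdw(S) in bytes/s for all S in ns(ds(C), ds(C'));
// an empty path means local communication. Names are illustrative.
final class TransferDuration {
    static double durDs(long volume, List<Long> pathBandwidths) {
        if (pathBandwidths.isEmpty()) return 0;      // local: bandwidth is infinite
        long bottleneck = Long.MAX_VALUE;
        for (long bdw : pathBandwidths)              // the slowest segment dominates
            bottleneck = Math.min(bottleneck, bdw);
        return (double) volume / bottleneck;
    }
}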
As $\mathcal{I}$ is finite, we can assign a unique natural number to every invocation. Let $e : \mathcal{I} \to \{1, \ldots, |\mathcal{I}|\}$ be an arbitrary bijection. We define a total order on $\mathcal{I}$. Let $I_1 = (C_1, C_1', t_1, v_1) \in \mathcal{I}$ and $I_2 = (C_2, C_2', t_2, v_2) \in \mathcal{I}$, then
\[ I_1 \preceq I_2 \;:\Leftrightarrow\; t_1 < t_2 \;\vee\; (t_1 = t_2 \wedge e(I_1) \le e(I_2)) \]
This allows us to determine whether the timeframes of two data transfers overlap, i.e., whether they conflict. We define
\[ ovl := \sum_{\substack{I_1, I_2 \in \mathcal{I} \\ I_1 \prec I_2}} \begin{cases} \min\bigl(t_1 + dur_{ds}(I_1),\; t_2 + dur_{ds}(I_2)\bigr) - t_2 & \text{if } t_2 < t_1 + dur_{ds}(I_1) \\ 0 & \text{otherwise} \end{cases} \]
$ovl$ measures the severity of communication conflicts. For each pair of conflicting data transfers, the length of the overlap is added. We sum up the latencies of all invocations with
\[ slt := \sum_{(C, C', t, v) \in \mathcal{I}} lat\bigl(ds(C), ds(C')\bigr) \]
The total costs of communication for a given distribution in the time-dependent model are therefore
\[ tdcc := \sum_{I \in \mathcal{I}} dur_{ds}(I) + ovl + slt \]
Of course, the summands of tdcc could be weighted. E.g., one could emphasise conflict avoidance. There are trivial solutions that lead to tdcc = 0: distributions that allocate all components to a single processor avoid remote invocations altogether. So, the optimisation of tdcc is only meaningful in combination with other constraints, e.g., even resource exploitation.

Constant Communication Costs

For networks that consist of only a single segment, we suggest a simpler model. The bandwidth and latency can be given as constants: $cbdw \in \mathbb{N}$, $clat \in \mathbb{R}$. We aggregate the costs of communication between a pair of components:
\[ cce : (\mathcal{C} \times \mathcal{C}) \to \mathbb{R}, \qquad cce : (C_1, C_2) \mapsto \begin{cases} \displaystyle\sum_{\substack{(C, C', t, v) \in \mathcal{I} \\ C = C_1 \wedge C' = C_2}} \left( \frac{v}{cbdw} + clat \right) & \text{if } C_1 \neq C_2 \\[1ex] 0 & \text{otherwise} \end{cases} \]
The complete costs of remote communication are then
\[ ccc := \sum_{\substack{C_1, C_2 \in \mathcal{C} \\ ds(C_1) \neq ds(C_2)}} cce(C_1, C_2) \]
We can interpret the problem of minimising ccc as a graph cut problem. $\mathcal{C}$ is the set of nodes of an undirected graph $G = (\mathcal{C}, E)$. $\{C_1, C_2\}$ is an edge ($\in E$) in this graph iff $cce(C_1, C_2) > 0 \vee cce(C_2, C_1) > 0$. An edge $\{C_1, C_2\}$ has the weight $ccew\{C_1, C_2\} := cce(C_1, C_2) + cce(C_2, C_1)$. We then seek a partitioning $\mathcal{C}_{ds}$ such that the sum of the edge weights in the cut is minimal.
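Evaluating ccc for a candidate distribution then amounts to summing the edge weights across the cut. A minimal sketch (data layout and names are ours):

// Sketch: ccc as the weight of the cut induced by a distribution.
// ccew[i][j] = cce(C_i, C_j) + cce(C_j, C_i) for i < j; ds[i] is the index
// of the processor hosting component i. Names are illustrative.
final class CutWeight {
    static double ccc(double[][] ccew, int[] ds) {
        double cut = 0;
        for (int i = 0; i < ccew.length; i++)
            for (int j = i + 1; j < ccew.length; j++)
                if (ds[i] != ds[j]) cut += ccew[i][j]; // edge crosses the cut
        return cut;
    }
}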
4.2.4 Physical interpretation
If we restrict ourselves to constant demand predictions for homogeneous processors and constant communication costs in a single network segment, we have to find a distribution that minimises
\[ c := \alpha \cdot prdd + \beta \cdot ccc \]
As the optimisation of prdd can be reduced to the optimisation of c, this restricted problem is still NP-hard. We suggest a physical interpretation of the problem that allows us to apply known iterative heuristics (see Figure 4.6). We take the graph $G = (\mathcal{C}, E)$ (as defined in Section 4.2.3), the function $prd : \mathcal{C} \to \mathbb{R}^k$ that attaches a vector to every node, and the function $ccew : E \to \mathbb{R}$ that specifies the weight of every edge. We regard the nodes as physical bodies with locations in $\mathbb{R}^d$. As high communication costs demand that components be co-located, we consider the edges to be attractive forces between adjacent nodes. As resource conflicts demand that components be distributed, the vectors describe repelling forces that take effect between all nodes that use the same resources. We then use a discrete simulation to compute the effects of these forces. This approach corresponds to spring embedder algorithms in graph drawing [Ead84]. After the simulation we use a clustering technique to determine the partitioning.

Discrete Simulation

Our algorithm for discrete simulation is based on the spring embedder implementation described by Forster [For04, p. 285-323]. At first we need to determine d. In Figure 4.6 we use $d = 2$, but for any larger example at least three dimensions should be used. Harel and Koren report good results for high-dimensional spring embedders (e.g., $d = 50$) [HK04]. The placement of the nodes in $\mathbb{R}^d$ can be described as a function
\[ \delta : \mathcal{C} \to \mathbb{R}^d \]
As an initial placement $\delta_0$ we suggest distributing the nodes uniformly on a hypersphere. Based on a placement, we can compute the force that acts on a node $v$:
\[ force : \mathcal{C} \times (\mathcal{C} \to \mathbb{R}^d) \to \mathbb{R}^d \]
\begin{align*}
force : (v, \delta) \mapsto\ & \sum_{\substack{w \in \mathcal{C},\ \{v,w\} \in E,\\ \delta(v) \neq \delta(w)}} \kappa \ln\!\left( \frac{\|\delta(w) - \delta(v)\| \cdot ccew\{v, w\}}{\lambda} \right) \frac{\delta(w) - \delta(v)}{\|\delta(w) - \delta(v)\|} && \text{(attractive forces)} \\
 & + \sum_{\substack{w \in \mathcal{C},\\ \delta(v) \neq \delta(w)}} \frac{\mu \, \langle prd(v), prd(w) \rangle}{\|\delta(v) - \delta(w)\|^2} \cdot \frac{\delta(v) - \delta(w)}{\|\delta(v) - \delta(w)\|} && \text{(repelling forces)}
\end{align*}
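A direct transcription of the force term, with placements and demand vectors as arrays, might look as follows (a sketch; all names are ours):

// Sketch: force acting on node v under placement 'delta'.
// delta[v]: position in R^d; ccew[v][w]: edge weight (0 means no edge);
// prd[v]: resource-demand vector; kappa, lambda, mu: tuning parameters.
// All names are illustrative.
final class ForceModel {
    static double[] force(int v, double[][] delta, double[][] ccew,
                          double[][] prd, double kappa, double lambda, double mu) {
        int d = delta[v].length;
        double[] f = new double[d];
        for (int w = 0; w < delta.length; w++) {
            if (w == v) continue;
            double[] diff = new double[d];               // delta(w) - delta(v)
            double dist = 0;
            for (int i = 0; i < d; i++) {
                diff[i] = delta[w][i] - delta[v][i];
                dist += diff[i] * diff[i];
            }
            dist = Math.sqrt(dist);
            if (dist == 0) continue;                     // coinciding nodes exert no force
            if (ccew[v][w] > 0) {                        // attractive force along the edge
                double a = kappa * Math.log(dist * ccew[v][w] / lambda) / dist;
                for (int i = 0; i < d; i++) f[i] += a * diff[i];
            }
            double s = 0;                                // <prd(v), prd(w)>
            for (int i = 0; i < prd[v].length; i++) s += prd[v][i] * prd[w][i];
            double r = mu * s / (dist * dist * dist);    // repulsion, directed away from w
            for (int i = 0; i < d; i++) f[i] -= r * diff[i];
        }
        return f;
    }
}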
Figure 4.6: A physical interpretation of the optimisation problem allows the application of a discrete simulation

The three parameters $\kappa, \lambda, \mu \in \mathbb{R}$ allow fine-tuning of the effect of distance on attractive and repelling forces. We use $T \in \mathbb{R}$ to denote a temperature that describes the plasticity in an iteration. An iteration calculates a new placement $\delta_{i+1}$ from a placement $\delta_i$:
\[ dsiterate : (\mathcal{C} \to \mathbb{R}^d) \times \mathbb{R} \to (\mathcal{C} \to \mathbb{R}^d) \]
\[ dsiterate : (\delta_i, T_i) \mapsto \delta_{i+1} \quad \text{with} \quad \delta_{i+1} : v \mapsto \delta_i(v) + T_i \cdot force(v, \delta_i) \]
A function
\[ temp : \mathbb{R} \times (\mathcal{C} \to \mathbb{R}^d) \times (\mathcal{C} \to \mathbb{R}^d) \to \mathbb{R} \]
specifies the development of the temperature from one iteration to the next. By comparing $\delta_{i-1}$ and $\delta_i$, temp can account for rotations and vibrations in the placement. A placement-agnostic example is
\[ temp : (T_i, \delta_{i-1}, \delta_i) \mapsto \tfrac{1}{3} T_i \]
The algorithm Discrete Simulation is specified in Listing 4.1. It terminates when the temperature falls below $T_{min}$.

prevDelta := null;
currDelta := δ0;
mintmp    := Tmin;
tmp       := T0;
while tmp >= mintmp do {
    prevDelta := currDelta;
    currDelta := dsiterate(prevDelta, tmp);
    tmp       := temp(tmp, prevDelta, currDelta);
}

Listing 4.1: Algorithm Discrete Simulation
Clustering

The resulting placement $\delta_r$ can then be subjected to a clustering technique. We examine the k-means algorithm [Mac67]. Let $X$ be the scatter plot of $\delta_r$:
\[ X := \delta_r(\mathcal{C}) \]
The centroid of a set of points $Y \subset \mathbb{R}^d$ is
\[ centroid : \mathcal{P}(\mathbb{R}^d) \to \mathbb{R}^d, \qquad centroid : Y \mapsto \frac{1}{|Y|} \sum_{y \in Y} y \]
A partitioning of $X$ into $k$ partitions can be expressed as a function
\[ p : X \to \{1, \ldots, k\} \]
An iteration computes a new partitioning $p_{i+1}$ from a given partitioning $p_i$:
\[ kmiterate : (X \to \{1, \ldots, k\}) \to (X \to \{1, \ldots, k\}), \qquad kmiterate : p_i \mapsto p_{i+1} \]
with
\[ p_{i+1} : x \mapsto j \quad \text{with} \quad \left\| centroid\bigl(p_i^{-1}(j)\bigr) - x \right\| = \min\left\{ \left\| centroid\bigl(p_i^{-1}(l)\bigr) - x \right\| \;\middle|\; l \in \{1, \ldots, k\} \right\} \]
The algorithm k-means starts with a partial partitioning $p_0$ in which each partition contains exactly one unique point of the set $X$. These points may be selected randomly. It then iterates for a predefined number of passes. k-means returns exactly $k$ partitions. Thus, it is necessary to execute it with different $k \in \{1, \ldots, |\mathcal{P}|\}$ to find distributions that do not use all processors.
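One iteration of k-means on the scatter plot, i.e., assignment to the nearest centroid followed by recomputation of the centroids, might look as follows (a sketch; all names are ours):

// Sketch: one k-means iteration. x[j]: points of the scatter plot X;
// centroids[l]: current centroid of partition l (updated in place).
// Returns the new assignment p. Names are illustrative.
final class KMeans {
    static int[] kmIterate(double[][] x, double[][] centroids) {
        int[] p = new int[x.length];
        for (int j = 0; j < x.length; j++) {            // assignment step
            double best = Double.POSITIVE_INFINITY;
            for (int l = 0; l < centroids.length; l++) {
                double dist = 0;
                for (int i = 0; i < x[j].length; i++) {
                    double diff = centroids[l][i] - x[j][i];
                    dist += diff * diff;
                }
                if (dist < best) { best = dist; p[j] = l; }
            }
        }
        for (int l = 0; l < centroids.length; l++) {    // update step: new centroids
            double[] sum = new double[centroids[l].length];
            int count = 0;
            for (int j = 0; j < x.length; j++) {
                if (p[j] != l) continue;
                count++;
                for (int i = 0; i < sum.length; i++) sum[i] += x[j][i];
            }
            if (count > 0)
                for (int i = 0; i < sum.length; i++) centroids[l][i] = sum[i] / count;
        }
        return p;
    }
}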
4.3 Restructuring
We want to specify restructuring patterns that can be used to modify an application's architecture. We use a graph notation similar to that of SAFRAN's FScript (see Section 3.3) to describe the architecture of a whole application, as well as restructuring patterns. The nodes of such a directed graph are all involved components and the components' interfaces. Edges that connect components with components express containment (child edges). All components are connected to their interfaces (interface edges). Finally, edges between interfaces represent bindings (binding edges).

Let $\mathcal{C}$ be the set of components, $\mathcal{F}$ be the set of interfaces, and $\mathcal{T}$ be the set of types. $(G_a, t, i_e)$ is an architectural description, iff $G_a = (V, E)$ is a directed graph with
\[ V \subseteq \mathcal{C} \cup \mathcal{F}, \qquad E \subseteq \bigl( (\mathcal{C} \times \mathcal{C}) \cup (\mathcal{C} \times \mathcal{F}) \cup (\mathcal{F} \times \mathcal{F}) \bigr) \cap (V \times V), \]
\[ t : V \to \mathcal{T}, \qquad i_e : V \cap \mathcal{F} \to \{\mathrm{true}, \mathrm{false}\} \]
$t$ returns the type of a component or interface, while $i_e$ returns true if an interface is external. $\mathcal{A}$ denotes the set of all architectural descriptions: $(G_a, t, i_e) \in \mathcal{A}$.

The boundary of an architectural description contains all external interfaces of components that have no parent in this description. The function bnd computes the boundary:
\[ bnd : \mathcal{A} \to \mathcal{P}(\mathcal{F}) \]
\[ bnd : \bigl( (V, E), t, i_e \bigr) \mapsto \bigl\{ F \in V \cap \mathcal{F} \;\big|\; i_e(F) \,\wedge\, \exists\, C \in V \cap \mathcal{C} : (C, F) \in E \,\wedge\, \nexists\, C' \in V \cap \mathcal{C} : (C', C) \in E \bigr\} \]
A restructuring pattern consists of two architectural descriptions. Since these descriptions should be interchangeable, there has to be an isomorphism between their boundaries that maintains types. $(A_1, A_2, r)$ is a restructuring pattern, iff
\[ A_1 = (G_{a1}, t_1, i_{e1}) \in \mathcal{A}, \qquad A_2 = (G_{a2}, t_2, i_{e2}) \in \mathcal{A}, \]
and $r : bnd(A_1) \to bnd(A_2)$ is a bijection with
\[ \forall\, F \in bnd(A_1) : t_1(F) = t_2(r(F)) \]
Each description of a restructuring pattern can be used as a search pattern, as well as a replacement pattern. If we are searching for a description $A_s = ((V_s, E_s), t_s, i_{es}) \in \mathcal{A}$ in a target description $A_t = ((V_t, E_t), t_t, i_{et}) \in \mathcal{A}$, we are testing for subgraph isomorphism. $(V_m, m)$ is a match for $A_s$ in $A_t$, iff
\[ V_m \subseteq V_t, \qquad E_m := E_t \cap (V_m \times V_m), \]
\[ \forall\, (C, C') \in (V_m \cap \mathcal{C}) \times (V_t \cap \mathcal{C}) : \bigl( (C, C') \in E_t \Leftrightarrow (C, C') \in E_m \bigr) \quad \text{(children must be included)} \]
and $m : V_s \to V_m$ is a bijection with
\[ \forall\, v \in V_s : t_s(v) = t_t(m(v)) \;\wedge\; \forall\, F \in V_s \cap \mathcal{F} : i_{es}(F) = i_{et}(m(F)) \;\wedge\; \forall\, (v, w) \in V_s \times V_s : \bigl( (v, w) \in E_s \Leftrightarrow (m(v), m(w)) \in E_m \bigr) \]
During a restructuring, all components in the match are removed. Therefore, a component can only be matched if all of its child components are themselves part of the match. Otherwise, such child components would be orphaned. So, no child edge that starts in a match may point outside of the match. We define the child degree of a node in a graph $G = (V, E)$:
\[ cd_G : (V \cap \mathcal{C}) \to \mathbb{N}, \qquad cd_G : v \mapsto \bigl| \{ w \in V \cap \mathcal{C} \mid (v, w) \in E \} \bigr| \]
For every component in a match, the child degree must equal the child degree of this component in the original architecture:
\[ \forall\, C \in V_m \cap \mathcal{C} : cd_{(V_t, E_t)}(C) = cd_{(V_m, E_m)}(C) \]
This property simplifies the search for a match: for every component in the search pattern, we look for a corresponding component in the target with identical type and child degree.

Let $(A_1, A_2, r)$ be a restructuring pattern and $(V_m, m)$ be a match for $A_1$ in an architecture $A$. Let $bnd_m := m(bnd(A_1))$ be the boundary of the match. A proper neighbour of an interface in this boundary is itself not part of the match. A restructuring is implemented by the following actions (see the API sketch below):

1. All components in $V_m \cap \mathcal{C}$ are stopped.
2. The components described in $A_2$ are created, together with their interfaces and specified bindings.
3. For every interface $F$ in the boundary of the match $bnd_m$,
   (a) the observable state at $F$ is extracted and transferred to $r(m^{-1}(F)) \in bnd(A_2)$,
   (b) every proper predecessor interface $F_p$ of $F$, as described in the original architecture $A$ ($(F_p, F) \in E$ with $F_p \notin V_m$), is rebound to $r(m^{-1}(F)) \in bnd(A_2)$,
   (c) every binding of $F$ to a proper successor interface $F_s$, as described in the original architecture $A$ ($(F, F_s) \in E$ with $F_s \notin V_m$), is unbound. A new binding from $r(m^{-1}(F)) \in bnd(A_2)$ to $F_s$ is inserted.
4. The newly created components in $V_2 \cap \mathcal{C}$ are started.
5. The replaced components in $V_m \cap \mathcal{C}$ are removed.

Restructuring can merge small components, split large components, or completely change the composition of a set of components. To be flexible enough to change implementations, the involved components are not altered during restructuring – they are replaced by new ones. The main goal of the restructuring process is to exploit locality, i.e., to reduce the overhead related to distribution when a set of collaborating components is running on the same processor. Therefore, restructuring patterns are applied locally.
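The five actions map naturally onto the standard Fractal controller interfaces. The following sketch is ours, not the thesis's implementation; it elides state transfer and the rebinding of boundary interfaces (step 3) to comments, and error handling is omitted.

import org.objectweb.fractal.api.Component;
import org.objectweb.fractal.util.Fractal;

// Sketch of the restructuring actions against the standard Fractal API.
// 'matched' are the components of the match, 'created' those of the
// replacement pattern, 'parent' is their common enclosing composite.
// Helper structure is our own assumption.
class Restructuring {
    void restructure(Component parent, Component[] matched, Component[] created)
            throws Exception {
        for (Component c : matched)                       // 1. stop old components
            Fractal.getLifeCycleController(c).stopFc();
        for (Component c : created)                       // 2. insert replacements
            Fractal.getContentController(parent).addFcSubComponent(c);
        // 3. for every boundary interface F of the match:
        //    (a) extract the observable state at F and transfer it to r(m^{-1}(F)),
        //    (b) rebind proper predecessors of F via their BindingController
        //        (unbindFc/bindFc) to the corresponding new interface,
        //    (c) unbind F from its proper successors and bind the new interface.
        for (Component c : created)                       // 4. start new components
            Fractal.getLifeCycleController(c).startFc();
        for (Component c : matched)                       // 5. remove old components
            Fractal.getContentController(parent).removeFcSubComponent(c);
    }
}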
4.4 Unified approach
Redistribution and local restructuring are conflicting optimisations. If redistribution is performed first, components that could have been merged by restructuring may no longer be co-located. If restructuring is performed first, low-traffic bindings that could have served as cut points may be hidden inside monolithic components. Therefore, we want to execute both optimisations in concert.
For every restructuring pattern, we distinguish the more flexible of its two architectural descriptions as general. Usually, this is the description with more components. The other description is called specialised. If a restructuring pattern is executed with its specialised description as the search pattern and its general description as the replacement pattern, we call this a generalisation. The reverse direction is a specialisation. We call an architecture canonical if no more generalisations can be performed.

To optimise an architecture, we make it seemingly canonical. I.e., we perform every possible generalisation on a description of the architecture, but not on the architecture itself. Each generalisation has a performance impact, should it actually have to be performed. This always happens when the constituents of a generalisation are distributed on different processors. Therefore, we introduce an attractive force between all components of the replacement pattern that corresponds to these costs. Further attractions are inserted for potential specialisations on the canonical description, to represent their expected benefits. The resulting canonical description is enriched with these attractions.

The original architecture already contains attractive and repelling forces – the costs for resource demand and communication. These must be preserved during generalisation. Each generalisation has to provide a function that converts information about component behaviour and binding utilisation into information about the replacement components:
\[ conv : \bigl( (\mathcal{C} \to \mathbb{R}^k) \times ((\mathcal{C} \times \mathcal{C}) \to \mathbb{R}) \bigr) \to \bigl( (\mathcal{C}' \to \mathbb{R}^k) \times ((\mathcal{C}' \times \mathcal{C}') \to \mathbb{R}) \bigr) \]
\[ conv : (prd, ccew) \mapsto (prd', ccew') \]
The enriched canonical description can then be optimised with our algorithm from Section 4.2.4. After clustering, specialisations are performed on every partition, which reverts some of the generalisations that were planned before. The resulting description is then applied to the application. It incorporates balanced redistribution and restructuring.
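As an example of what conv might look like for a single generalisation that splits one merged component ab into the two finer components a and b: the measured demand and traffic of ab is divided between a and b. The split ratio and all names below are our own assumptions; real patterns would derive the ratio from profiling metadata.

// Sketch of a conv function for one generalisation: split the profile of
// the merged component 'ab' between the finer components 'a' and 'b'.
// prdAb: ab's demand vector; ccewAb[w]: ab's traffic towards component w;
// ratio: share attributed to a. Output arrays are filled in place.
final class ConvExample {
    static void conv(double[] prdAb, double[] ccewAb, double ratio,
                     double[] prdA, double[] prdB, double[] ccewA, double[] ccewB) {
        for (int i = 0; i < prdAb.length; i++) {      // split resource demand
            prdA[i] = ratio * prdAb[i];
            prdB[i] = (1 - ratio) * prdAb[i];
        }
        for (int w = 0; w < ccewAb.length; w++) {     // split traffic to third parties
            ccewA[w] = ratio * ccewAb[w];
            ccewB[w] = (1 - ratio) * ccewAb[w];
        }
        // The new binding between a and b receives the attractive force that
        // represents the cost of actually performing this generalisation,
        // should a and b end up on different processors.
    }
}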
Chapter 5

Conclusions

A large and diverse research community from Grid computing and ubiquitous computing has sparked interest in self-adaptivity. There are numerous projects that provide tool support for the development of self-adaptive applications, and many of these projects include extensive, monolithic implementations. The underlying challenge is complex, in that it is both complicated and composite. This calls for joint efforts – but unfortunately, the partial solutions of the different implementations are difficult to combine. Aldinucci et al. [AAB+05] address this issue by suggesting a general model of the different problems of self-adaptivity. This effort aids the creation of common terminology. But as the model is deliberately underspecified, it is not aimed at providing interoperability of implementations.

It seems that component models are a least common denominator. Most projects use component frameworks, because they provide access to the structure of an application and allow the integration of non-functional concerns. Fractal [BCS02] is a component model that has gained considerable support. E.g., the projects PLASMA [LH05] and SAFRAN [Dav05] are based on the Fractal component model, and ProActive [BBC+06] provides a distributed implementation.

We have compared PLASMA, SAFRAN, and Dynaco/AFPAC [ABP04] as examples of very different approaches to self-adaptivity. Apart from being based on components, they share one other property: they all see adaptation as a specific reaction to a specific event. We did not encounter a project that treated self-adaptation as a continuous process or as a reaction to an unspecific event, like the complaint of a user about bad performance in general. So, there are sophisticated facilities to combine and filter events. But it seems that there are no optimisers that analyse execution context and application state as a whole.

Instead of providing specific adaptation rules, we suggest considering generic metadata like cost models. We have chosen two areas of adaptivity (namely redistribution and local restructuring) and have explored generic methods to select the best adaptation from these areas. The underlying optimisation problems are NP-hard. Uniform distribution of resource demand on homogeneous processors can be seen as a multi-dimensional PARTITION problem for multiple partitions.
We suggest a heuristic based on a physical interpretation: we regard the components as physical bodies that are subject to attractive and repelling forces representing communication and resource usage. The search for different interpretations and additional heuristics provides an area for future research. Especially from the perspective of Grid computing, a fast approximation for resource distribution on heterogeneous processors is desirable.

We provide a formal foundation for pattern-based restructuring. Patterns have an obvious use in local restructuring, where the adaptation decision can be made solely on the basis of the pattern and information about the distribution. But in conjunction with a cost model and a usage profile they could be extended to behaviour-driven restructuring. Together with domain-specific knowledge, patterns could be used to optimise applications on a semantic level.

Redistribution and local restructuring are conflicting optimisations. This is understandable, as redistribution changes the allocation of components to processors, and this allocation is a basis for the decision process in local restructuring. On the other hand, restructuring changes the granularity of components, which influences the flexibility of redistribution. We suggest transforming an architecture to its finest possible granularity, which we call a canonical description. On this level, potential restructurings can be expressed as attractive forces between components. This allows a unified optimisation of redistribution and restructuring.
List of Figures

2.1 Graphical notation introduced by Fractal (see text)
2.2 Demonstration of various Fractal features, in particular sharing and visibility
2.3 An abstract component and a possible implementation in Julia [BCL+04, Fig. 1]
2.4 A typical object graph with active objects [BBC+06, Fig. 1]
2.5 An asynchronous call to a remote active object results in a transparent future [BBC02, Fig. 1]
2.6 An asynchronous call to a distributed typed group yields a transparent future to the result group [BBC02, Fig. 4]
2.7 Different distributions for a set of ProActive components [BCM03, Fig. 2]
3.1 Abstract schema of an adaptation manager [AAB+05, Fig. 1]
3.2 Model of a parallel self-adaptable component [BAP06, Fig. 2.1]
3.3 Structure of a video streaming server [LH05, Fig. 6]
3.4 A simple Fractal architecture and its representation as a graph
3.5 Result of an FScript action
4.1 Optimal distributions for different execution and communication behaviour in a simple Fractal architecture
4.2 Restructuring of a) two composite components into b) a single composite component
4.3 The components A, B, and C are allocated to the same processor P; the development of resource demand leads to a resource conflict (stacked view)
4.4 a), b) Sequence diagram of the execution behaviours of two components for different distributions; c) impact of a safety margin on resource conflicts
4.5 Several network segments with different bandwidth
4.6 A physical interpretation of the optimisation problem allows the application of a discrete simulation
Listings

2.1 Specification of the StateController interface
2.2 Specification of the ProfilingController interface
2.3 Specification of the MetadataController interface
3.1 A simple FScript action and policy
4.1 Algorithm Discrete Simulation
List of Tables

3.1 Comparison of the self-adaptivity frameworks Dynaco, SAFRAN, and PLASMA
Bibliography

[AAB+05] M. Aldinucci, F. André, J. Buisson, S. Campa, M. Coppola, M. Danelutto, and C. Zoccolo. Parallel program/component adaptivity management. In ParCo 2005: Parallel Computing, Malaga, Spain, 2005.

[ABP04] Françoise André, Jérémy Buisson, and Jean-Louis Pazat. Dynamic adaptation of parallel codes: toward self-adaptable components for the grid. In Vladimir Getov and Thilo Kielmann, editors, Component Models and Systems for Grid Applications, pages 145–156. Springer, June 2004. Proceedings of the Workshop on Component Models and Systems for Grid Applications held June 26, 2004 in Saint Malo, France. ISBN 0-387-23351-2.

[BAP05] Jérémy Buisson, Françoise André, and Jean-Louis Pazat. Dynamic adaptation for grid computing. In Peter M. A. Sloot, Alfons G. Hoekstra, Thierry Priol, Alexander Reinefeld, and Marian Bubak, editors, EGC, volume 3470 of Lecture Notes in Computer Science, pages 538–547. Springer, 2005.

[BAP06] Jérémy Buisson, Françoise André, and Jean-Louis Pazat. AFPAC: Enforcing consistency during the adaptation of a parallel component. Scalable Computing: Practice and Experience, 7(3):83–95, September 2006. Electronic journal (http://www.scpe.org/); extended version of a paper presented at ISPDC 2005.

[BBC02] Laurent Baduel, Françoise Baude, and Denis Caromel. Efficient, flexible, and typed group communications in Java. In Joint ACM Java Grande – ISCOPE 2002 Conference, pages 28–36, Seattle, 2002. ACM Press. ISBN 1-58113-559-8.

[BBC+06] Laurent Baduel, Françoise Baude, Denis Caromel, Arnaud Contes, Fabrice Huet, Matthieu Morel, and Romain Quilici. Grid Computing: Software Environments and Tools, chapter Programming, Deploying, Composing, for the Grid. Springer-Verlag, January 2006.

[BCL+04] Eric Bruneton, Thierry Coupaye, Matthieu Leclercq, Vivien Quéma, and Jean-Bernard Stefani. An open component model and its support in Java. In Ivica Crnkovic, Judith A. Stafford, Heinz W. Schmidt, and Kurt C. Wallnau, editors, CBSE, volume 3054 of Lecture Notes in Computer Science, pages 7–22. Springer, 2004.

[BCM03] Françoise Baude, Denis Caromel, and Matthieu Morel. From distributed objects to hierarchical grid components. In International Symposium on Distributed Objects and Applications (DOA), Catania, Sicily, Italy, 3–7 November 2003. Lecture Notes in Computer Science, Springer-Verlag.

[BCS02] E. Bruneton, T. Coupaye, and J. Stefani. Recursive and dynamic software composition with sharing. In WCOP'02, 2002.

[BCS04] Eric Bruneton, Thierry Coupaye, and Jean-Bernard Stefani. The Fractal component model, specification version 2.0-3. Technical report, The ObjectWeb Consortium, 2004.

[BDR06] Matthias Baldauf, Schahram Dustdar, and Florian Rosenberg. A survey on context-aware systems. International Journal of Ad Hoc and Ubiquitous Computing (IJAHUC), forthcoming, 2006.

[Dav05] Pierre-Charles David. Développement de composants Fractal adaptatifs : un langage dédié à l'aspect d'adaptation. PhD thesis, Université de Nantes / École des Mines de Nantes, July 2005.

[DL05a] Pierre-Charles David and Thomas Ledoux. Une approche par aspects pour le développement de composants Fractal adaptatifs. In 2ème Journée Francophone sur le Développement de Logiciels Par Aspects (JFDLPA 2005), pages 91–108, Lille, France, September 2005. Hermès.

[DL05b] Pierre-Charles David and Thomas Ledoux. WildCAT: a generic framework for context-aware applications. In Proceedings of MPAC'05, the 3rd International Workshop on Middleware for Pervasive and Ad-Hoc Computing, Grenoble, France, November 2005.

[DL06] Pierre-Charles David and Thomas Ledoux. Safe dynamic reconfigurations of Fractal architectures with FScript. In Proceedings of the 5th Fractal Workshop at ECOOP 2006, Nantes, France, July 2006.

[Ead84] Peter Eades. A heuristic for graph drawing. In Congressus Numerantium, volume 42, pages 149–160, 1984.

[For04] Michael Forster. Skript zur Vorlesung Zeichnen von Graphen, 2004.

[HK04] David Harel and Yehuda Koren. Graph drawing by high-dimensional embedding. Journal of Graph Algorithms and Applications, 8(2):195–214, 2004.

[Kar72] Richard M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations. Plenum Press, 1972.

[LH05] Oussama Layaïda and Daniel Hagimont. PLASMA: A component-based framework for building self-adaptive applications. In Proc. SPIE/IS&T Symposium on Electronic Imaging, Conference on Embedded Multimedia Processing and Communications, San Jose, CA, USA, January 2005.

[Mac67] J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. University of California Press, 1967.

[RC02] Barry Redmond and Vinny Cahill. Supporting unanticipated dynamic adaptation of application behaviour. In Boris Magnusson, editor, ECOOP, volume 2374 of Lecture Notes in Computer Science, pages 205–230. Springer, 2002.

[SGM02] Clemens Szyperski, Dominik Gruntz, and Stephan Murer. Component Software: Beyond Object-Oriented Programming. Addison-Wesley Professional, 2nd edition, November 2002.

[SP97] Clemens Szyperski and C. Pfister. Component-oriented programming: WCOP'96 workshop report. In Max Mühlhäuser, editor, WCOP96, pages 127–130. dpunkt, Heidelberg, 1997.