Using Reflection to Support Dynamic Adaptation of System Software: A Case Study Driven Evaluation

Jim Dowling, Tilman Schäfer, Vinny Cahill, Peter Haraszti, Barry Redmond
Distributed Systems Group, Department of Computer Science, Trinity College Dublin
{jpdowlin, schaefet, vinny.cahill, harasztp, redmondb}@cs.tcd.ie
Abstract. A number of researchers have recently suggested the use of reflection as a means of supporting dynamic adaptation of object-oriented software, especially systems software including both middleware and operating systems. In order to evaluate the use of reflection in this context, we have implemented a resource manager that can be adapted to use different resource management strategies on behalf of its clients using three distinct technologies: design patterns, dynamic link libraries, and reflection. In this paper we report on this experiment and compare the three approaches in terms of performance, the ability to separate functional code from code concerned with adaptation, and programming effort. We conclude that although the overhead of using reflection may be high, reflection offers significant advantages in terms of the ability to separate functional and adaptation code.
1 Introduction
The ability to design software that can be tailored, whether statically or dynamically, to the requirements of specific users presents an opportunity to optimise the performance of the software based on application-specific knowledge. This is particularly important in the context of systems software such as middleware and operating systems, which are characterised by the need to support a wide range of applications, each with different requirements, simultaneously and on behalf of different users. In [Draves93], Draves identified a number of problems that can be addressed by making operating systems adaptable, and similar problems arise in the context of other forms of system software:

• feature deficiency: the operating system does not provide some feature required by the application;
• performance: (some) operating system services do not provide performance that is acceptable to the application;
• version skew: the application is dependent on a different version of the operating system for its correct operation.
OOPSLA’99 Workshop on Reflection and Software Engineering (Walter Cazzola, Robert J. Stroud, and Francesco Tisato, Editors)
Clearly, adaptability should allow operating systems to be more easily tailored to provide (only) the features required by the currently executing applications. It is perhaps not so obvious how adaptability might improve performance. The essential observation is that every software system embodies particular trade-offs concerning the way in which the services that it provides are implemented. Typically, in general-purpose operating systems, these trade-offs are made to suit the requirements of what are perceived to be typical applications. The resulting trade-offs will certainly not be the right ones for every application, resulting in sub-optimal performance for some. There is a substantial body of evidence showing that, for some applications at least, exploiting knowledge of the application in making these trade-offs can substantially improve performance.

Kiczales et al. [KLL97] use the term mapping dilemma to refer to these trade-offs and characterise a mapping dilemma as a “crucial strategy issue whose resolution will invariably bias the performance of the resulting implementation”. Decisions as to how to resolve mapping dilemmas are called mapping decisions, and a mapping conflict occurs when the application performs poorly as a result of an inappropriate mapping decision. Increased adaptability is intended to allow mapping decisions to be more easily made on an application-specific basis.

Following this rationale, a number of designers of operating systems and middleware have proposed architectures for supporting dynamic adaptation in their systems [Bersh96], [HPM93]. For the most part these systems have been implemented from scratch in an ad hoc manner with little or no specific language support for dynamic adaptation. A number of other researchers have suggested using reflection as a "principled" [Blair99] approach to supporting dynamic adaptation, particularly in the context of middleware [Blair99], [Camp99], [Ledoux97].
However, little practical experience exists on the relative merits and demerits of using reflection versus more traditional and ad hoc methods of supporting dynamic adaptation of system software. With this in mind, this paper reports on a case study driven evaluation of the use of reflection to support dynamic adaptation versus the use of design patterns and the more or less ad hoc use of dynamic link libraries (DLLs). While design patterns represent a best-practice approach to the structuring of an object-oriented system that is intended to be adaptable, DLLs represent a more traditional approach that is not specific to the use of object-orientation. Doubtless other approaches are possible. However, our intent was solely to get some real experience with the likely trade-offs involved in choosing between different approaches.

We chose three of many possible evaluation criteria. Firstly, we looked at the effect of the chosen approach on system performance as well as the cost of dynamic adaptation. We expected that supporting dynamic adaptation would imply some overhead that would hopefully be compensated for by the ability to choose more appropriate algorithms for a particular client or mix of clients. Our study only assesses the relative overhead due to the different approaches and not the benefits of supporting dynamic adaptation per se. In the long term, we are particularly interested in being able to retrofit support for dynamic adaptation to existing or legacy software as well as in what sort of language support for dynamic adaptation might be useful. To this end we were interested in the degree to which we could separate functional
code from code concerned with dynamic adaptation, as well as in what effort is needed to add support for dynamic adaptation to a system.
2 Dimensions of dynamic adaptation
In considering the inclusion of support for dynamic adaptation in a software system, a wide variety of issues need to be addressed. These range from how adaptation is to be triggered, through how system consistency is maintained in the face of adaptation, to issues of authorisation and protection. We categorise these issues under eight dimensions:
1. The interface dimension. A key question concerns the design of the interface via which adaptation is triggered, since this largely determines the kind of adaptations that can be supported. A typical system may provide an interface towards its clients allowing them to trigger adaptation either declaratively (e.g., based on the specification of Quality of Service parameters) or procedurally (e.g., by providing hooks to which client code can be bound). A system may also provide an interface towards its environment, allowing environmental changes to trigger adaptations (e.g., a change in network connectivity or bandwidth). Other issues related to the interface dimension include determining the scope [KLL97] of requested adaptations.

2. The authorisation dimension. A separate issue is the question of who is authorised to trigger an adaptation and when, along with the means for making this determination.

3. The admission dimension. Even if a requested adaptation is authorised, there remains the question of whether the adaptation is possible under the current system load, taking into account the presence of concurrent applications and the availability of required resources.

4. The extension dimension. Given that an adaptation is to take place, another question is where the required code comes from. Typically, it will be loaded from disk or over the network, or it may be provided directly as part of a client application.

5. The safety dimension. A related question concerns how system security is ensured in the face of loaded code. This is in part determined by the nature of the adaptation interface exposed by the system and may involve the use of mechanisms such as software sandboxing or hardware isolation of untrusted code that is provided by an application. It also addresses questions such as what execution rights this code may have.

6. The binding model. When a new component is loaded into a running system, there needs to be a mechanism to allow that code to be activated in response to client requests. This may involve the use of a name or trading service or other discovery mechanism, or, where a higher degree of transparency is required, some redirection mechanism or the use of an anonymous (perhaps event-based) communication mechanism [Bersh96].

7. The state transfer dimension. Where an existing component is being replaced with a new or upgraded implementation, it will often be necessary to transfer state between the old and new components, for which a standardised state transfer mechanism must be defined.

8. The dependency management model. The software components comprising a software system often have complex interdependencies. Adapting a single component may have implications for other components that need to be explicitly considered in performing adaptation [Camp98].

In our on-going work we are addressing each of these dimensions. We are particularly interested in the degree to which these issues can be tackled independently of the system to be adapted, the degree to which adaptation can be transparent to client applications (where appropriate) and the level of language support that might be useful in supporting dynamic adaptation. In this experiment we consider the impact of using design patterns, DLLs, and reflection in addressing the extension, binding and state transfer dimensions in the context of an adaptable resource manager.
3 The Buffer Manager Case Study
As a canonical (and easily understood) example of a resource manager we chose a memory allocator. Given a memory pool, i.e., a region of contiguous memory, the role of a memory allocator is simply to allocate non-overlapping subregions of the available memory to its client(s) on request. For historical reasons, we tend to refer to our memory allocator as "the buffer manager".

The buffer manager supports two basic operations: allocate and release. The allocate operation locates a contiguous region of unused memory of a specified size in the memory pool and returns a pointer to that region. The release operation returns a region of allocated memory to the pool. Contiguous regions of unused memory are stored in a linear list called the free list. The objective of a buffer manager implementation is to minimise the execution times of the allocate and release operations, ideally with both operations running in near constant time.

In implementing a buffer manager it is very difficult to predict its performance when subjected to a typical sequence of allocate and release operations generated by some application. A typical application calls the allocate operation periodically, uses the allocated memory for a certain amount of time, and then calls the release operation to free up the memory. Depending on the pattern of memory
usage by the application, it is difficult, if not impossible, to implement an allocation strategy that the buffer manager can use to minimise the execution times of allocate and release in all cases. Sequences of allocate and release operations are very much application-specific and, as such, a single allocation strategy will have to trade off performance for generality.

Examples of potential strategies for the buffer manager’s allocate and release operations are first-fit, best-fit and worst-fit [Preiss97]. The first-fit allocation strategy always allocates storage in the first free area in the memory pool that is large enough to satisfy a request. The best-fit allocation strategy always allocates storage from the free area that most closely matches the requested size. The worst-fit allocation strategy always allocates storage from the largest free area. The choice of strategy dictates how the buffer manager keeps track of free memory, i.e., whether the list of free memory regions is ordered by size or address. For example, for fastest execution time with low memory usage, first-fit is the optimal allocation strategy, but under certain circumstances it may cause excessive fragmentation and best-fit would be a more appropriate strategy.

Given these observations, we believe that the buffer manager is a good example of a manager of a shared resource that could benefit from dynamic adaptation depending on the mix of clients in the system at any time and their memory usage patterns. Moreover, the buffer manager exhibits the classic requirement to synchronise operations on behalf of possibly multiple clients, although we note that an optimised implementation of the buffer manager could be used where there is only a single client in the system.
Moreover, since the buffer manager necessarily maintains state describing the status of the memory pool, there may be a requirement to transfer that state between buffer manager implementations if and when dynamic adaptation takes place. In fact, we have considered three different usage scenarios for an adaptable buffer manager as follows:

− the simplest implementation supports a single client, allowing the client to change memory allocation strategy at will;
− a multi-client buffer manager supports synchronisation between multiple clients; any client can change the memory allocation strategy but all clients share the same strategy;
− the final version supports per-client memory allocation strategies.

Starting from a minimal single-client, non-adaptable buffer manager (described below), we have implemented each of the scenarios described above using design patterns, DLLs, and a reflective language and assessed the impact of each implementation approach in terms of

− performance overhead;
− the degree of separation of concerns between the functional (i.e., implementation of allocate and release) and non-functional aspects (i.e., support for synchronisation and dynamic adaptation);
− the programming effort required to add support for the non-functional aspects given the minimal implementation.
It should be noted that, where supported, adaptation is triggered by a meta-interface that allows clients to request their desired buffer management strategy from among those available. While this interface is semantically the same in each of the usage scenarios, it may differ syntactically.

The minimal buffer manager

In its minimal form, the public interface of the buffer manager provides operations for allocating and releasing regions of memory, as depicted in Figure 1. Internally, the buffer manager maintains a linked list of free blocks of memory. The actual policy employed by the buffer manager for managing the memory is embedded in the implementation of its methods and cannot be changed. This minimal implementation could of course support multiple clients if the necessary synchronisation code were included in its methods; our implementation doesn’t.
[Class diagram: class BufferManager (-sentinel : FreeList*; +allocate(in : size_t); +release(mem : void*)), used by Client]
Fig. 1. Minimal Buffer Manager
[Class diagram: class BufferManagerStrategy (+sentinel : FreeList*; +theStrategy : ARInterface; +allocate(in : size_t); +release(mem : void*); +changeStrategy()) delegates to interface ARStrategy (+allocate(); +release()), implemented by FirstFit, WorstFit and BestFit; used by Client]
Fig. 2. The Buffer Manager with the Strategy Pattern
Buffer manager implementation using design patterns

The strategy pattern is an object-oriented design pattern [Gamma95] that has been used previously in the design of dynamically adaptable systems such as TAO [Schm97]. The strategy pattern describes how the buffer manager can delegate the implementation of its exported operations to methods in a replaceable strategy object.
In the case of the buffer manager, the strategy objects implement the methods in the ARStrategy interface shown in Figure 2. Strategy objects are able to access the buffer manager’s state (free list) by binding to a reference to that state. The strategy objects themselves are stateless. For the system programmer to provide multiple strategies requires that corresponding implementations of ARStrategy be provided and compiled into the system.

As depicted in Figure 2, the strategy pattern also specifies an interface that allows clients to request a change of implementation strategy by giving the name of the required strategy as an argument. The changeStrategy operation represents a declarative meta-interface [KLL97] to the buffer manager. At any time, a client can use knowledge of its own memory access patterns to select the most appropriate allocation strategy from those available by invoking changeStrategy. changeStrategy operates by swapping the buffer manager’s old strategy object reference with a substitute reference to a new strategy object. When any client calls changeStrategy, the allocation strategy is changed for all clients of the buffer manager.

Supporting synchronisation between multiple clients

Synchronisation is clearly required to ensure that the buffer manager’s state does not become inconsistent under concurrent usage by many clients. Moreover, where dynamic adaptation is supported, any change in strategy should be both transparent to other clients and atomic. Other clients being serviced by the previous strategy object should be able to complete their execution, while clients that issue requests while the strategy is being replaced should be blocked until the update completes. To synchronise access to the buffer manager’s state, synchronisation code has to be added to the allocate, release, and changeStrategy operations. Broadly speaking, there are two possible ways in which this synchronisation code can be added:

1. One possible approach is to use a mutual exclusion lock to serialise all operations on the buffer manager. In this case code to acquire and release the lock can be inserted before and after the buffer manager dispatches the operations to the allocate, release, and changeStrategy methods.
2. An alternative approach might be to allow multiple clients to execute operations on the buffer manager concurrently, with synchronisation taking place at a finer level of granularity. In this case the synchronisation code is embedded in the implementation of the allocate, release, and changeStrategy methods.

For both options, we have to recompile and rebuild our buffer manager implementation to add synchronisation code. In the case of the second approach we would also need to keep track of how many clients are executing in the buffer manager in order to allow us to determine when changeStrategy can be safely executed. For simplicity, our implementation uses the first approach.
Supporting per-client allocation strategies

Since different clients have different patterns of memory access and it is difficult to provide an optimal allocation strategy for all of them, it makes sense to allow each client to have its own allocation strategy. Employing the abstract factory pattern [Gamma95] (Figure 3), each client can request a client-specific buffer manager from a factory of type BMFactory. A BMFactory is a singleton factory object that supports the creation of client-specific buffer managers as well as managing how the global memory pool is shared between these competing buffer managers.

[Class diagram: singleton class BMFactory (-sentinel : ChunksOfMem*; -callbacks : BufferManager*; +createBM(); +changeStrategy(); +allocateChunk(); +releaseChunk()) creates class BufferManager (-sentinel : FreeList**; -singleton : BMFactory*; +allocate(); +release()), used by Client]
Fig. 3. Supporting per-client Buffer Managers with the Abstract Factory Pattern

Two different implementation strategies are possible depending on whether the factory returns a minimal buffer manager or one that implements the strategy pattern, i.e., whether changing allocation strategy requires the client to bind to a different buffer manager or not. In the first case, clients request the creation of a buffer manager that provides a specific allocation strategy using the createBM operation. The returned buffer manager object is then responsible for:

• handling allocate and release operations from that client;
• requesting (potentially non-contiguous) chunks of memory from the BMFactory if needed to service client requests.

The buffer manager should be able to satisfy client allocate and release operations by allocating and releasing memory from the chunk(s) of memory it has been allocated by the BMFactory. If a buffer manager does not have a contiguous memory area available to satisfy a client’s allocate request, the buffer manager calls the BMFactory allocateChunk operation to request a new chunk of free memory. This requires synchronised access to the BMFactory’s memory pool. The number of these requests for chunks of memory from competing buffer managers should be small compared to the number of requests from clients. Moreover, since chunks can be of the same size, a single policy can be chosen for the factory.
If the client wants to change its allocation strategy, it calls the BMFactory changeStrategy operation, passing a reference to its old buffer manager and a string representing the new allocation strategy. When changing a client’s buffer manager object, the BMFactory does the following:

1. locates an appropriate buffer manager implementation for the requested allocation strategy;
2. creates the new buffer manager object;
3. transfers the state from the old buffer manager object to its replacement;
4. reorders the list of free memory areas if appropriate.

The most obvious way to transfer the state of the old buffer manager object to its replacement is to use a copy constructor. Where the buffer manager objects represent non-substitutable types, specialised mappings could also be employed.
[Class diagram: singleton class BMFactory (-sentinel : ChunksOfMem*; -callbacks : BufferManager*; +createBM(); +allocateChunk(); +releaseChunk()) creates class BufferManagerStrategy (+sentinel : FreeList*; +theStrategy : ARInterface; +allocate(in : size_t); +release(mem : void*); +changeStrategy()) delegating to interface ARStrategy (+allocate(); +release()); used by Client]
Fig. 4. Supporting per-client Buffer Managers with the Abstract Factory and Strategy Patterns
An alternative implementation could have the BMFactory create per-client buffer manager objects that implement the strategy pattern. The buffer manager objects that implement the strategy pattern are now responsible for:

• handling the allocate and release operations from that client;
• requesting (potentially non-contiguous) chunks of memory from the BMFactory;
• changing the client’s allocation strategy.

This framework provides the same meta-interface to clients as the original strategy pattern based implementation, but still provides per-client buffer manager objects. Moreover, synchronisation is only required in the implementation of the BMFactory and not in the single-client buffer managers. However, this implementation will not perform as well as the previous implementation unless the strategy changes frequently.
Buffer manager implementation using DLLs

An obvious limitation of designs based on the strategy pattern per se is that the set of supported policies must be predetermined. This limitation can be overcome using an ad hoc combination of dynamic linking, function pointers and customised method dispatching. Support for dynamic linking of client-specific strategies at runtime has been used before in dynamically adaptable systems such as SPIN [Bersh96] and 2K [Camp99].

In this design, calls to the buffer manager allocate and release operations use a level of indirection similar to the virtual function table in C++. The buffer manager’s allocate and release operations delegate their implementation to function pointers referring to functions exported from a DLL. Since the functions in the DLL are
[Class diagram: class BufferManagerDll (+sentinel : FreeList*; -theDll : DllManager*; -alloc : AllocFnPtr; -release : releaseFnPtr; -reorder : reorderFnPtr; -theStrategy : char*; +allocate(); +release(); +changeStrategy()) uses class DllManager (+loadDll(name : char*); +unloadDll(); +getFnPtr(fn : char*)); theStrategy.so exports the functions {theStrategy}Allocate(), {theStrategy}Release() and {theStrategy}reorder(); used by Client]
Fig. 5. The Buffer Manager with DLL Loading of Strategies
mapped into the same address space as the buffer manager, they can bind to the buffer manager object (and its state) via a reference passed to the original object as an argument. Using the changeStrategy operation on the buffer manager, the allocate and release operations can be redirected to different functions. The new functions can be loaded at runtime from a different DLL specified by the client, while the old functions can either be left in memory or unloaded. This means that there is no limit on the number of potential allocation strategies a buffer manager can provide and, more importantly, that clients can develop client-specific allocation strategies as extension code (DLLs here) and link them in at runtime. In this sense, the client no longer has a declarative meta-interface to the buffer manager, but rather a procedural one.
Supporting synchronisation between multiple clients

As for the strategy pattern, there are two possible places where synchronisation code can be added:

1. before and after the buffer manager dispatches the operations to the allocate, release, and changeStrategy implementation methods;
2. in the implementation of the allocate, release and changeStrategy methods.

In either case we still have to recompile and rebuild our buffer manager implementation to add support for synchronisation. Even in the second case, were we to provide new implementations of the allocate and release functions with support for synchronisation from a DLL, we would still need to maintain a reference count to ensure that changeStrategy can be executed atomically.
Supporting per-client allocation strategies

As with the strategy pattern, the use of DLLs can be combined with the BMFactory object to provide per-client buffer managers. The buffer manager objects are responsible for:

• handling the allocate and release operations from that client;
• requesting (potentially non-contiguous) chunks of memory from the BMFactory;
• changing the client’s allocation strategy using the DllManager object.
Reflective implementation

The reflective version of our dynamically adaptable buffer manager is implemented using the Iguana/C++ reflective programming language [Gow97]. Iguana allows the selective reification of language constructs such as object creation/deletion, method invocation, and state access. Important design goals of Iguana included support for the transparent addition of reflection to existing software and the provision of language support for building dynamically adaptable systems. The implementation of Iguana/C++ is based on the use of a pre-processor: the application’s source code is augmented with meta-level directives that declare one or more metaobject protocols (MOPs) and make the association between classes or individual objects and particular MOPs. The annotated code is then translated into standard C++.

In the case of the buffer manager, adaptation is introduced by reifying invocations on the original buffer manager class and by providing a meta-interface that allows the code of the allocate/release methods to be rebound to code provided at run-time, for example, in the form of a DLL. Using Iguana, the steps involved in making the buffer manager adaptive consist of:

1. defining a MOP that reifies all structural information (class, method and attribute) and reifies method invocation;
2. associating the appropriate classes with that MOP (specifically classes BufferManager and Hole); and
3. defining an extension protocol, i.e. a meta-interface, that allows clients to request that a different strategy be used by the buffer manager.
The syntax for defining a MOP in Iguana consists of the keyword protocol followed by an enumeration of the selected reification categories, as outlined in Figure 6. Each reification category is followed by the name of a class implementing that reification category; for example, objects of type DefaultInvocation reify Invocation. Classes are associated with a MOP using the selection operator (==>).

    protocol DefaultMOP {
        reify Class      : MClass;
        reify Method     : MMethod;
        reify Attribute  : MAttribute;
        reify Invocation : DefaultInvocation;
    };

    class Hole          ==> DefaultMOP { .. };
    class BufferManager ==> DefaultMOP { .. };

    class AdaptationProtocol {
    public:
        void changePolicy(Mobject *bufman, char *strategy);
    };

Fig. 6. Sample MOP definition and protocol selection in Iguana
An extension protocol is simply a class definition that encapsulates the necessary steps for carrying out meta-level computations, in this case switching to a new strategy. The purpose of an extension protocol is to separate meta-level code from the actual MOPs, allowing the same extension protocol to be used for multiple, compatible MOPs. In the case of AdaptationProtocol, the name of the new strategy is used to identify a DLL containing the object code. No further modifications to the source are necessary and the annotated code is then preprocessed into standard C++.

When a client binds to a buffer manager object in the first place, it is provided with a default strategy, the strategy employed by the original buffer manager class. Invocation is not reified as long as the client does not request a different strategy, implying that the standard C++ invocation mechanism is used for calling methods. Only in the event that adaptation takes place is invocation reified. We achieve this by inserting run-time checks into the application code, guarding all invocations to the buffer manager object. This has the advantage of reducing the overhead that is paid in the case where a client doesn’t use the adaptation mechanism.
New strategies can be provided on the fly by subclassing the annotated buffer manager class, redefining the allocate/release methods and compiling the code into a DLL. Clients can now request a different strategy by simply invoking the meta-interface and providing the name of the DLL. It is worth mentioning that the original interface of the buffer manager class has not been altered; the additional functionality to support adaptation is completely encapsulated in the extension protocol and is orthogonal to the base-level program. The meta-level code of the extension protocol responsible for rebinding the allocate/release methods performs the following tasks:

1. open a DLL as specified by the strategy parameter;
2. rebind the code of the allocate/release methods;
3. transfer state (if necessary);
4. reify invocation for the client.

Figure 7 shows the conceptual view of the resulting meta-level architecture. Invocations on the buffer manager are reified and diverted to the meta-level. Rebinding the methods is done by updating the method metaobjects: each method metaobject contains a function pointer that points to the actual function code. As we mentioned earlier, all invocations to the buffer manager object are guarded, allowing the reification of method invocations to be switched on/off dynamically. Once new code is loaded from the DLL, all further invocations are trapped and redirected to the new implementation.
Fig. 7. Meta-level architecture allowing the dynamic rebinding of methods and state transfer
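The four rebinding tasks above can be sketched in plain C++ using the POSIX dlopen API. This is only an illustrative sketch: the metaobject layout, the exported symbol name `allocate`, and the `lib<strategy>.so` naming convention are our assumptions, not Iguana's actual implementation.

```cpp
#include <dlfcn.h>
#include <cstdio>
#include <cstddef>
#include <cassert>

// Hypothetical method metaobject, following the description above: it
// holds a function pointer to the current implementation of allocate().
using AllocFn = void* (*)(std::size_t);

struct MethodMetaobject {
    AllocFn impl = nullptr;   // points at the active strategy code
    void*   dll  = nullptr;   // handle of the DLL the code came from
};

// Steps 1 and 2 of the extension protocol: open the DLL named after the
// requested strategy and rebind the metaobject's function pointer.
// Returns false (leaving the metaobject untouched) if the DLL or the
// symbol cannot be found.
bool rebind(MethodMetaobject& m, const char* strategy) {
    char path[256];
    std::snprintf(path, sizeof path, "./lib%s.so", strategy);
    void* dll = dlopen(path, RTLD_NOW);        // step 1: open the DLL
    if (!dll) return false;
    void* sym = dlsym(dll, "allocate");        // step 2: look up new code
    if (!sym) { dlclose(dll); return false; }
    if (m.dll) dlclose(m.dll);                 // drop the previous strategy
    m.impl = reinterpret_cast<AllocFn>(sym);
    m.dll  = dll;
    return true;   // steps 3 and 4 (state transfer, reifying invocation
}                  // for the client) are omitted from this sketch
```

Note that dlopen itself reference-counts loaded objects, which matters for the per-client scenario discussed later.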
Supporting synchronisation between multiple clients
Synchronisation can be achieved in Iguana by defining a MOP that intercedes in all method invocations on the buffer manager and executes additional synchronisation code before and after each method is executed. The corresponding meta-level architecture is shown in figure 8: calls are trapped by the buffer manager's class metaobject and synchronously dispatched to the allocate/release methods. By reifying creation, we are able to modify the semantics of object creation so that successive calls of the new operator always return a reference to a single, global buffer manager object.

OOPSLA'99 Workshop on Reflection and Software Engineering

Fig. 8. Method calls are trapped and synchronised at the meta-level

Switching to a new strategy in this scenario can only be done when there are no pending requests, i.e. the extension protocol needs to synchronise with the method dispatcher of the buffer manager. As a consequence, the MOP that implements synchronisation and the extension protocol are no longer independent of each other, though both remain independent of the base-level code.
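A minimal plain-C++ sketch of this interception follows; the class names and the use of a std::mutex are our own simplification (Iguana generates the trapping code automatically), but it shows the before/after pattern the MOP implements.

```cpp
#include <mutex>
#include <cstddef>

// Base-level buffer manager (simplified to a byte counter).
struct BufferManager {
    std::size_t allocated = 0;
    void allocate(std::size_t n) { allocated += n; }
    void release(std::size_t n)  { allocated -= n; }
};

// Stand-in for the class metaobject of figure 8: every call is trapped,
// synchronisation code runs before (lock) and after (unlock) the method,
// and the call is dispatched synchronously to the base level.
struct SyncMetaBM {
    BufferManager& base;
    std::mutex     lock;

    void allocate(std::size_t n) {
        std::lock_guard<std::mutex> guard(lock);   // "before" code
        base.allocate(n);                          // dispatch to base level
    }                                              // "after" code runs as
    void release(std::size_t n) {                  // the guard unlocks
        std::lock_guard<std::mutex> guard(lock);
        base.release(n);
    }
};
```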
Supporting per client allocation strategies
Iguana distinguishes between local and shared metaobjects: local metaobjects are associated with a single instance of a (base-level) class whereas shared metaobjects are shared between all instances of a class. This feature is particularly useful for allowing multiple clients to choose a strategy of their own preference without affecting competing clients. In this scenario, we allow a client to bind to an individual instance of the buffer manager class with its own, local metaobjects. In other words, each buffer manager object now maintains a private set of metaobjects representing its class definition. This allows much finer-grained adaptation to take place, affecting only individual objects rather than whole classes. A buffer manager object can now autonomously switch to a different strategy on behalf of its client using the same meta-interface and MOP as described before.
We still have to link the per-client buffer managers to a global memory manager object and to forward requests to it when necessary. We decided in this case to hard-code the delegation mechanism into the allocate/release methods. An alternative would have been to reify state access for the class representing the free memory blocks (class Hole) and to delegate all accesses to the global memory manager. This could only be done by exploiting application-specific knowledge (the meta-level code would no longer be application independent) and would also introduce significant run-time overhead.
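The local/shared distinction can be mimicked in plain C++ as follows. This is our own simplified model with illustrative strategies: a shared metaobject corresponds to per-class state, a local metaobject to per-instance state, so one client's switch leaves the others untouched.

```cpp
#include <cstddef>

// A strategy here is just a policy function, e.g. how a request size is
// rounded before allocation (illustrative, not the paper's real strategies).
using Strategy = std::size_t (*)(std::size_t);

std::size_t first_fit(std::size_t n) { return n; }
std::size_t best_fit(std::size_t n) { return (n + 15) & ~std::size_t(15); }

struct BufferManager {
    // Shared "metaobject": one strategy for all instances of the class.
    static Strategy shared_strategy;
    // Local "metaobject": overrides the shared one for this instance only.
    Strategy local_strategy = nullptr;

    std::size_t allocate(std::size_t n) {
        Strategy s = local_strategy ? local_strategy : shared_strategy;
        return s(n);
    }
    // Per-client adaptation: affects only this buffer manager object.
    void adapt(Strategy s) { local_strategy = s; }
};
Strategy BufferManager::shared_strategy = first_fit;
```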
4 Evaluation
As we explained in section 1, we were interested in evaluating the different approaches to supporting dynamic adaptation under three criteria:
• Performance: what overhead does support for dynamic adaptation introduce and what is the cost of performing adaptation?
• Separation of concerns: how far can the design and implementation of an adaptable system be separated from its application domain, i.e., to what extent is existing application code affected by adding support for dynamic adaptation?
• Thinking adaptively - ease of programming: what effort is involved in adding support for dynamic adaptation and how well does the framework/language support programmers in thinking in terms of building adaptable systems?
4.1. Performance

In this section, we compare how the different design strategies affect the performance of the running system. Specifically, we are interested in
− the relative overhead incurred in handling client requests;
− the cost of performing adaptation, i.e., switching between strategies.

Experimental setup
The measurements were conducted on a 350 MHz Pentium II PC with 64 MB of RAM running the Linux operating system. The programs are written in C++ and were compiled using GNU C++ 2.8. The reflective version also uses the Iguana/C++ preprocessor.

Experiment 1 - Adaptation overhead

Intent This experiment evaluates the cost of method dispatching for the different adaptation techniques and the relative overhead they introduce in a typical usage scenario.

Description The cost of a null method call is measured to determine the relative overhead of method dispatching for the different adaptation approaches. The performance of the buffer manager under a simulated load is then measured. The simulated load consists of a fixed number of time-stepped, non-interleaved allocate and release operations with varying sizes.

Results The overhead for a null method invocation is summarised in table 1. In Iguana, a reified method invocation costs about 12 times as much as a C++ virtual function invocation. In table 2, the No Sync. column represents the case of a single client performing the allocate and release operations with no synchronisation code. For Iguana, we measured both the overhead introduced solely by the additional run-time checks that guard all accesses to the buffer manager object, and the combined cost of the run-time checks and reified method invocations. The former scenario represents the case where the system has not yet been adapted and carries out the default strategy; the latter represents the case after the system has switched to a new strategy and invocations are redirected. The Sync. column represents the case where a synchronisation lock is acquired before method execution and released after completion. We measured only the effective time spent for a single client, thus measuring only the overhead introduced by the synchronisation code.

Evaluation As expected, using a reflective language to build an adaptable system introduces the most overhead compared to the object-oriented implementations. Although the relative overhead for a reflective method invocation is high, its effective run-time overhead in the context of this experiment does not seriously impact performance. We expect that a real application that infrequently uses the buffer manager could gain enough performance from adapting its strategy to warrant its usage.

Table 1. Relative null method call overhead

  Null method call                       Relative execution time
  C++ method invocation                   1
  Strategy pattern                        1.38
  DLL                                     1.45
  Iguana (check)                          1.10
  Iguana (check + reified invocation)    16.50

Table 2. Relative overall execution time

  Overall execution time                 No Sync.   Sync.
  C++ method invocation                   1          1
  Strategy pattern                        1.07       1.09
  DLL                                     1.05       1.06
  Iguana (check)                          1.54       N/A
  Iguana (check + reified invocation)     1.58       1.64
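The gap between the Iguana (check) and Iguana (check + reified invocation) rows can be illustrated with a plain-C++ sketch of the guard; the flag and metaobject layout here are our assumptions about what the inserted run-time check reduces to.

```cpp
#include <cstddef>

// Illustrative replacement strategy that a client might load later.
std::size_t doubled(std::size_t n) { return 2 * n; }

struct BufferManager {
    // The run-time check inserted by the preprocessor reduces to a flag
    // test; reified dispatch through the metaobject only happens after
    // a client has actually requested adaptation.
    bool reified = false;
    std::size_t (*meta_allocate)(std::size_t) = nullptr;

    std::size_t allocate_direct(std::size_t n) { return n; }

    std::size_t allocate(std::size_t n) {
        if (!reified)                      // cheap guard: the "check" row
            return allocate_direct(n);     // standard C++ invocation
        return meta_allocate(n);           // the "check + reified" row
    }
};
```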
Experiment 2 - Cost of adaptation

Intent This experiment evaluates the cost of dynamically loading an adaptation strategy at runtime.

Description The dynamic adaptation of the buffer manager's allocation strategy requires loading new strategy code from a DLL at runtime and is triggered when an application changes its pattern of memory usage. Application-specific knowledge is used to decide when to change the strategy.

Results The results are given in table 3 and show the absolute time spent (in ms) carrying out 3000 switches from best-fit to first-fit to worst-fit strategies.

Evaluation The overhead of loading code from a DLL at runtime consists primarily of the time taken to swap the code from disk into memory and to map the code into the application's address space. System calls to open the DLL and acquire references to its exported functions make up the rest of the overhead. Lazy relocation of symbols was used throughout the measurements. The costs of loading and unloading a DLL depend largely on the memory footprint of the DLL. In the strategy pattern version only two functions were compiled into the DLL, whereas in Iguana we compiled a complete class hierarchy (all classes are embedded into the Iguana meta-level class hierarchy) into object code, resulting in a considerably larger DLL. When client-specific strategies are supported (rightmost column in table 3), adaptation is significantly faster because the DLL implementing the requested strategy might already be in use by a different client. A DLL is only unloaded when there are no remaining references to symbols in that DLL.

Table 3. Loading code at runtime (absolute time in ms)

  Technique   Single client   Multiple clients,   Multiple clients,
                              single strategy     multiple strategies
  Iguana      2280            2150                1310
  DLL          840             840                 620
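The sharing behaviour behind the rightmost column — a DLL stays resident while any client still uses its strategy — amounts to reference counting, sketched here with invented names:

```cpp
#include <map>
#include <string>

// Tracks how many clients currently use each strategy DLL. A DLL only
// needs to be loaded on the first acquire and may only be unloaded when
// the last client releases it.
struct StrategyCache {
    std::map<std::string, int> refs;

    // Returns true if the DLL must be (notionally) loaded now, false if
    // it is already resident - the cheap case measured in table 3.
    bool acquire(const std::string& name) { return refs[name]++ == 0; }

    // Returns true if this was the last reference, i.e. the DLL may now
    // be unloaded.
    bool release(const std::string& name) { return --refs[name] == 0; }
};
```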
4.2. Separation of concerns

The addition of the strategy pattern to the buffer manager to support dynamic adaptation necessitated the complete restructuring of the buffer manager class, leading to a tangling of the code that implements dynamic adaptation with the original code. Delegating the implementation of the buffer manager's methods to a strategy object also had the undesirable side-effect of making the object's state visible to the strategy object, either by making the state public or by using friends. The reflective version neither required modification of the original buffer manager class nor had any undesired side-effects, apart from its impact on performance.
The addition of the abstract factory pattern to support per-client buffer managers required changing the interface used to bind to a buffer manager. This change is not transparent to clients of the buffer manager and required rewriting the client code used to bind to it. The reflective buffer manager overcomes this problem, however, thanks to its ability to intercede in object creation and redirect the request to a meta-level factory object.
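The state-visibility side effect can be seen in a sketch of the strategy-pattern version; the names are invented and the real buffer manager is more involved, but the friend declaration is the essential point.

```cpp
#include <cstddef>

class AllocStrategy;

class BufferManager {
    // Undesirable side effect of the strategy pattern: the strategy
    // object needs access to the manager's internal state, granted
    // here via a friend declaration.
    friend class AllocStrategy;
    std::size_t free_bytes = 1024;
public:
    AllocStrategy* strategy = nullptr;
    std::size_t allocate(std::size_t n);   // delegates to the strategy
};

class AllocStrategy {
public:
    virtual ~AllocStrategy() = default;
    virtual std::size_t allocate(BufferManager& bm, std::size_t n) {
        if (n > bm.free_bytes) return 0;   // touches private state directly
        bm.free_bytes -= n;
        return n;
    }
};

std::size_t BufferManager::allocate(std::size_t n) {
    return strategy->allocate(*this, n);
}
```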
Our experience with the reflective programming language has shown that a framework supporting dynamic adaptation of an existing application can often be built independently of the application's domain.
4.3. Ease of programming

Here we compare the steps required of an application programmer to add the adaptation functionality to the buffer manager class, first using patterns, then using Iguana.

Strategy pattern:
1. Rewrite the original class's implementation and make its state visible to the strategy class;
2. Add the change-strategy interface to the original class;
3. Write the different strategy classes.

Iguana:
1. Write a MOP that reifies method invocation;
2. Write a separate extension protocol for changing strategy by reifying method invocation;
3. Implement the different strategies as subclasses of the buffer manager.

Abstract factory pattern:
1. Write the factory class and instantiate the factory object;
2. Rewrite the buffer manager class to request memory chunks from the factory object;
3. Rewrite all client code used to bind to a buffer manager.

Iguana:
1. Write a MOP that reifies object creation and implement the factory as a metaobject;
2. Write a subclass of the buffer manager class to request memory chunks from the factory.

A problem with writing meta-level code in Iguana is that it is not always intuitive: concepts such as "reification" and "meta-object protocols" are inherently abstract and not targeted at solving everyday problems. However, a major benefit of using a reflective programming language is that many patterns of adaptation can be encapsulated in meta-level code, even though some patterns require application-specific knowledge. For example, the strategy pattern can be encapsulated in a MOP, while the per-client buffer manager code requires a hybrid solution. We therefore conclude that it appears possible to build an adaptation framework that is orthogonal to its application domain, although this is likely to constrain the types of adaptation that are possible.
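The Iguana factory step — a MOP that reifies object creation — can be approximated in plain C++ by a creation metaobject that intercepts construction requests. The names here are ours; Iguana achieves this by trapping the new operator itself.

```cpp
#include <cstddef>

struct BufferManager {
    std::size_t clients = 0;   // how many clients were handed this object
};

// Stand-in for a creation metaobject: every "new BufferManager" request
// is redirected here, and all clients receive the same global instance,
// as in the synchronised scenario of section 3.
struct CreationMetaobject {
    BufferManager* instance = nullptr;

    BufferManager* create() {
        if (!instance) instance = new BufferManager();
        ++instance->clients;
        return instance;
    }
};
```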
5 Summary and conclusion
Our experience has shown that, using a reflective programming language, a framework to support dynamic adaptation of an application can be built independently of the application's domain. Adding dynamic adaptation functionality to an application does not necessarily require changing the static class structure of the application: the additional functionality can be completely encapsulated in an extension protocol and is orthogonal to the application's static class structure. Performance remains the main disadvantage of using a reflective programming language to build systems software. This aside, we believe that a reflective programming language adds the valuable intercessive features required for designing and building adaptable systems, most notably the provision of a general infrastructure (consisting of a MOP and an extension protocol) that can be used to make existing software adaptable. Future work will include the identification and application of other design patterns to existing code using Iguana/C++, as well as ongoing work on implementing our reflective programming language more efficiently.
Acknowledgements
The work is supported by Enterprise Ireland under Basic Research Grant SC/97/618.

References

[Bersh96] Brian Bershad, Przemyslaw Pardyak, et al., Language Support for Extensible Operating Systems, Workshop on Compiler Support for System Software, February 1996.
[Blair99] Gordon Blair et al., The Design of a Resource-Aware Reflective Middleware Architecture, In Proceedings of Meta-Level Architectures and Reflection '99, pp. 115-134.
[Cahill98] Vinny Cahill, The Iguana Reflective Programming Model, Technical Report, Dept. of Computer Science, Trinity College Dublin, 1998.
[Camp93] Roy Campbell et al., Designing and Implementing Choices: an Object-Oriented System in C++, Communications of the ACM, Sept. 1993.
[Camp98] Ashish Singhai, Aamod Sane and Roy Campbell, Quarterware for Middleware, 18th IEEE International Conference on Distributed Computing Systems (ICDCS 1998), pp. 192-201, May 1998.
[Camp99] Fabio Kon, Roy Campbell and Manuel Roman, Design and Implementation of Runtime Reflection in Communication Middleware: the dynamicTAO Case, ICDCS'99 Workshop on Middleware.
[Draves93] Richard P. Draves, The Case for Run-Time Replaceable Kernel Modules, In Proceedings of the 4th Workshop on Workstation Operating Systems, pp. 160-164, 1993.
[Gamma95] Erich Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, Addison Wesley, 1995.
[GC96] Brendan Gowing and Vinny Cahill, Meta-object Protocols for C++: The Iguana Approach, In Proceedings of Reflection '96, pp. 137-152.
[Gow97] Brendan Gowing, A Reflective Programming Model and Language for Dynamically Modifying Compiled Software, PhD Thesis, Dept. of Computer Science, Trinity College Dublin, 1997.
[HPM93] Graham Hamilton, Michael Powell and James Mitchell, Subcontract: A Flexible Base for Distributed Programming, Technical Report, Sun Microsystems, April 1993.
[Kiczal93] Gregor Kiczales, John Lamping, Chris Maeda, David Keppel and Dylan McNamee, The Need for Customisable Operating Systems, In Proceedings of the 4th Workshop on Workstation Operating Systems, pp. 165-169.
[KLL97] Gregor Kiczales, John Lamping, Cristina Lopes, Chris Maeda and Anurag Mendhekar, Open Implementation Guidelines, 19th International Conference on Software Engineering (ICSE), ACM Press, May 1997.
[Ledoux97] Thomas Ledoux, Implementing Proxy Objects in a Reflective ORB, CORBA Workshop, ECOOP '97.
[MNCK98] Scott Mitchell, Hani Naguib, George Coulouris and Tim Kindberg, Dynamically Reconfiguring Multimedia Components: A Model-based Approach, SIGOPS European Workshop on Support for Composing Distributed Applications, 1998.
[Preiss97] Bruno R. Preiss, Data Structures and Algorithms with Object-Oriented Design Patterns in C++, John Wiley & Sons, 1999.
[Schm97] Douglas C. Schmidt and Chris Cleeland, Applying Patterns to Develop Extensible and Maintainable ORB Middleware, IEEE Communications Magazine, Special Issue on Design Patterns, April 1999.