Appeared in: Proceedings of the Fifth IFIP/IEEE International Workshop on Distributed Systems: Operations & Management, Toulouse, France, 1994.

Trading and Distributed Application Management: An Integrated Approach Ernö Kovács, Stefan Wirag University of Stuttgart, Institute of Parallel and Distributed High-Performance Systems, Breitwiesenstr. 20-22, W-70565 Stuttgart, Germany, E-mail: {ernoe.kovacs, stefan.wirag}@informatik.uni-stuttgart.de

Abstract
Trading and distributed application management are two research areas that are strongly related. Trading is a supporting service for distributed systems that utilizes configuration data and current state information to mediate services. Distributed application management collects this information and controls the operation of the system. An integrated approach has to focus on the collaboration of both sides. This paper explores some of the arising problems. The use of configuration information, the notification of events, and access to dynamic state information require carefully designed interaction interfaces to minimize the effects of communication and distribution on the mediation process. In the other direction, application management can steer the operation and performance of the distributed system by influencing the trading process. We describe our implementation of both approaches and present some exemplary performance measurements.

1. Introduction
The concept of a trader is well established in the research community ([ANSA91], [BeRa91], [ISO92]). A trader is an infrastructure component of a distributed system to which a server can export a service offer and from which a client can import a service. During import a client may request desired properties of the server, e.g. certain qualities or minimum cost. The trading functionality is strongly related to the management of distributed applications. For example, configuration knowledge, current state information, and aggregated management information are utilized by the trader. A notification that a server is inoperable will disable its mediation. Therefore, management services work tightly together with the trader, and the seamless integration of both is crucial for the working of the system. Conversely, the trader's decisions determine the course of events in the distributed system. Clients will establish bindings to selected servers. Servers will receive calls by clients expecting a certain quality of service. Steering the decisions of the trader can be a valuable instrument for a system administrator. As an example, disabling a service offer reduces the workload on a node and may prepare for a future reconfiguration. In current approaches the trader mainly works on behalf of the service user. For management purposes, system administrators need to influence the mediation process. Third, future management systems will consist of management functions which are distributed throughout the system, and which offer management services that can be combined to control the large distributed systems of the future. In the same way as distributed applications use the trader, distributed management systems will employ the trading function to access offered management services or to create managing objects on specially provided execution servers ([JoDi92]). As this is just another application of the trader, we will not discuss it further.
The MELODY project integrated the use of management services into the trader. This required concepts to deal with communication delays, parallel activities, asynchronous event notification, dynamically required new management functions, and possibly high network load. Furthermore, we enabled system administrators to add implicit and explicit selection rules to the trader, thus steering the activities of the distributed system.

The remainder of the paper is organized as follows. Section 2 reviews related work. The following part establishes the commonly used terminology and basic trading concepts. This is followed by our model of the management of distributed applications. This model describes a distributed management system and its infrastructure, to which the components of a distributed application connect dynamically and where management functions are distributed throughout the system. Then we describe how the trading function is extended to use management services. We describe the arising problems and concepts to solve these problems. Afterwards, we show how system administrators may store explicit and implicit selection rules in the trader and how these influence the operation of the trader. We then describe our implementation and show some performance results. At the end we summarize and give an outlook on further work.

2. Related Work
The concept of trading originates from the work done in the ANSA project ([ANSA91]), although first ideas were already mentioned in [BiNe84]. Trading functionality of different forms is used in many systems (e.g. [RVM91]) and is under consideration to become the first ISO standard ([ISO92]) modelled after the Open Distributed Processing (ODP, [ISO91]) standards. The BERCIM project used the suite of OSI management protocols to access dynamic attributes. They reduce response time by checking only subsets of the available services, a policy they called probing ([BERC89]). A lot of work has already been done on the management of communication networks. SNMP ([CFSD90], [Rose91]) is widely accepted as a de-facto industrial standard for the management of network components. It is also widely accepted that SNMP is not suitable for systems management and application management ([MaSt93]). The suite of OSI management standards ([ISO89]) is recognized as a possible candidate, but lacks running implementations and also experience with the management of distributed applications. Other research like [MCWB91] or [DuCa93] focuses on specific applications or environments. The Distributed Management Environment (DME, [OSF91]) is based on the CORBA model ([OMG91]), which seems to be a common basis for future distributed systems and their management, but also needs to be verified through implementations and experience. Future management systems will move from a centralized management approach to a distributed systems model where management functions may be added at any time and at any location to deal with the increasing number of objects that are to be managed ([SJKJ93]).

3. Basic Trading Concepts
Trading involves three partners: the client, the server, and the trader. The server exports a service offer to the trader, announcing the availability, the type, and the properties of the offered service. A client imports a service offer from the trader. The trader selects suitable offers from its database based on the requested type and other selection criteria. The interaction between them can be modelled as shown in the following figure:

[Figure 1: The trading interaction. The server exports a service offer to the trader, the client imports a service and then uses it. The trader maintains the service offers in its context space and the service types in its type space.]

A trader is built upon the following concepts:

Context Space. Service offers are stored in the trader's database. This database is organized in a hierarchical structure called the context space. This structure results in a tree with service offers as leaves. The context space is used for naming the service offers and for structuring them according to user requirements (e.g. for administrative, organizational, geographical, or other reasons). The structure of the context space is used during service lookup to restrict the number of services to be compared.

Context Region Specification (CRS). The context region specification determines the regions of the context space to be searched during trading. Possible CRS expressions may specify a single context, a whole subtree, or a sequence of context regions, which are searched in ascending order until an appropriate service is found.

Service Type Space. Services are characterised by the type of their communication interface. The type defines the operations that are available and their signatures. Service types are related by an is-subtype-of relationship. The subtype relationship indicates that the interface of the subtype offers the same operations with the same signatures as the supertype. Services of a subtype may therefore be used instead of services of the supertype. The service types together with the subtype relation form a directed acyclic graph, called the service type space. During service selection this graph determines the compatibility between a service offer and the requested service.

Service Description. The functionality of a service is described by its service type. Other properties of the service are described by service attributes. An attribute consists of an attribute name and an attribute value. The attribute name unambiguously defines the semantics of the attribute and also the data type of the attribute value. An attribute value may be of a simple or a composite data type. Examples of attributes are: special capabilities of the server, performance properties, billing information, access rights, etc.

Service Selection Expression (SSE). Service selection expressions are used during import to specify the required kind of service. They consist of a service type, a context region specification, a constraining condition, an optimization condition, and a control block. The service type determines the kind of service that is searched for. Parameters in the control block allow the use of subtypes and express limits on the permitted query time or on the size of the result. The CRS determines the parts of the context space to be searched. The constraining condition is a boolean expression which declares the requirements that a service offer must fulfil to be considered for selection. The optimization condition gives an ordering criterion for the selected services so that a best service may be found. Optimization conditions are usually functions over the values of service attributes.

Trader Interface. The trader interface consists of operations for the exporter (export, modify, or withdraw a service offer), for the importer (read, list, search, select service offers), and for trader management (create, rename, and delete contexts). SSEs are used in the search operation, which results in an (ordered) list of available services, and in the select operation, which selects a best service.

Miscellaneous Other Concepts. Traders are a crucial part of a distributed system and should therefore be implemented in a fault-tolerant way. One possibility is the replication of the trader and its databases. Another possibility are federated or cooperative traders. In this case a trader is responsible for the services offered in a certain region of the network (a domain). Traders from different domains may work together to increase the base of available services and to offer a better trader service. Cooperation may range from simply merging the context and type spaces, over exporting and importing selected parts (federation, [BeRa91]), to runtime negotiation of cooperation conditions.
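The concepts above can be illustrated with a minimal sketch. All class and function names here (`Trader`, `Offer`, `select`, the flat offer list standing in for the context space) are illustrative assumptions, not the interface of the paper's trader:

```python
# Hypothetical sketch of the basic trading concepts: a trader database of
# service offers, a service type space as a subtype DAG, and a select
# operation applying a constraining and an optimization condition.

class Offer:
    def __init__(self, name, service_type, attributes):
        self.name = name                  # leaf name in the context space
        self.service_type = service_type  # node in the service type space
        self.attributes = attributes      # attribute name -> value

class Trader:
    def __init__(self, subtype_of):
        self.offers = []              # flat stand-in for the context space
        self.subtype_of = subtype_of  # is-subtype-of edges of the type space

    def export(self, offer):
        self.offers.append(offer)

    def _compatible(self, offered, requested):
        # an offer matches if its type is the requested type or a subtype
        t = offered
        while t is not None:
            if t == requested:
                return True
            t = self.subtype_of.get(t)
        return False

    def select(self, service_type, constraint, optimize):
        # the constraining condition filters, the optimization condition orders
        candidates = [o for o in self.offers
                      if self._compatible(o.service_type, service_type)
                      and constraint(o.attributes)]
        return max(candidates, key=lambda o: optimize(o.attributes),
                   default=None)

# A client importing a compile service with cost below 10, preferring low load:
type_space = {"fast_compile": "compile"}  # fast_compile is-subtype-of compile
trader = Trader(type_space)
trader.export(Offer("c1", "compile", {"cost": 5, "load": 0.9}))
trader.export(Offer("c2", "fast_compile", {"cost": 8, "load": 0.2}))
best = trader.select("compile",
                     constraint=lambda a: a["cost"] < 10,
                     optimize=lambda a: -a["load"])
```

The subtype walk shows why the offer of type `fast_compile` is still selected for a request of type `compile`.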

4. Management of Distributed Applications
Distributed applications consist of components running on different machines in the network and working together to perform a common task. Clients and servers are examples of such components. To deal with the wide dispersion of components and the increasing number of objects to be managed, the management task must be decomposed into different layers and performed by management functions that are distributed throughout the system ([Mage93], [ACNM92]). The vertical decomposition of management functions into different layers depends on the span of control and the composition of the objects to be managed. In addition, there is a horizontal decomposition of management functions into different functional areas like configuration, fault, or performance management. For distributed application management we distinguish between the component management (CM), which monitors and controls single components, and a general application management layer (GAM), which controls the working of the application as a whole. For more complex systems spanning different organisations or constructed from existing applications, there may be additional levels of management functions.

[Figure 2: Management layers. General application management and component management build upon infrastructure services and the system management of the individual hosts.]

Both management layers require services from the system management of a single host and from the infrastructure components of distributed systems (e.g. the name service or the security service). Typical interactions between the different layers of management are of a hierarchical nature, but may also involve complex interaction protocols (e.g. some form of agreement or cooperation). For instance, software distribution, installation, and component start-up are requested by the general management functions and performed by the system management. Infrastructure services like the name space or the security service are checked by CM and GAM functions during error detection, for example by comparing the stored information to the information obtained from the components directly.

Component Management (CM). The CM is responsible for managing a single component of a distributed system. Among the management functions provided are functions to configure a component, to monitor its state, to control operations, to manage the Quality-of-Service parameters, to collect performance and fault statistics, to recognize alarm conditions and faults, and to test a component. The following figure shows some data management aspects of the CM:

System Management

Repository Agent

Log

Agent Repository

Log

Component

CM management functions access a running component through a management agent running on every system. The agents are responsible for locating managed objects and for transferring management requests. This can be compared to the working of the CORBA object request broker ([OMG91]). During start-up, a component dynamically registers itself with the agent and provides access to management information stored in managed objects (MOs). Management requests are forwarded by the agents to the component. Management information is usually stored directly in the component, but additional management information can be accessed through the system management (e.g. processing time, memory usage), may be stored in separate management repositories (e.g. for component setup or to survive the termination of components), or in log files (e.g. information about recurrent activities). Management information is represented in a uniform object-oriented information model. The MOs are accessed in a location- and access-transparent way. They are named through a common naming tree which also contains location information.

General Application Management (GAM). General application management functions control and monitor the distributed application as a whole. For example, the availability of a single server (a component) differs from the availability of a service that can be performed by different servers. A GAM function collects the availability information from each server using CM functions. The GAM functions are executed on special dedicated servers. They need highly available management repositories to store persistent management information (e.g. configuration information, performance measurement data). Management functions are implemented as managing objects (MngO) and are located on specialized object servers. This realizes the concept of "management by delegation" ([YGY91]) and enables management functions to be placed physically near the managed system. This divides the load among different nodes and reduces network traffic. A trader could be used to locate an appropriate object server for a newly created MngO.
MngOs use the communication mechanism provided by the management agents to access MOs and to communicate with other MngOs. MngOs typically form a hierarchy where MngOs at the lower levels compute values for new MOs which in turn are used by the MngOs above them. MngOs communicating with the end user form the highest level. MngOs are contained in the normal naming tree of the management system. Their interactions are governed by explicitly or implicitly represented policies ([Mars93], [SJKJ93]). Management data is stored in separate management repositories which provide an efficient and general way to store management information. The information is protected against unauthorized access, can be requested over the network, may be replicated for performance reasons, etc. The following figure shows the architecture of the GAM:

[Figure 4: Architecture of the GAM. Management functions/managing objects (MF) run on object servers (OS) and operate on management data (MD) kept in management repositories (MR); a manager node (M, with an interface to a human user) holds the management view over the component nodes (C) that provide the service.]
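The difference between component availability and service availability, computed by a delegated managing object, can be sketched as follows. The `AvailabilityMngO` class, its probe callables, and the `servers_up` table are assumptions for illustration only:

```python
# Hedged sketch of "management by delegation": a managing object (MngO),
# placed near the managed servers, aggregates per-component availability
# (as collected via CM functions) into one service-level attribute that a
# GAM function or the trader can read.

class AvailabilityMngO:
    def __init__(self, component_probes):
        # component_probes: callables returning True if the component answers
        self.probes = component_probes

    def service_available(self):
        # the service is available if any one server can still perform it,
        # even though individual components may be down
        return any(probe() for probe in self.probes)

servers_up = {"s1": False, "s2": True}   # s1 has failed, s2 is running
mngo = AvailabilityMngO([lambda n=n: servers_up[n] for n in servers_up])
```

Here the service stays available although component `s1` is down, which is exactly the distinction drawn in the text.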

5. Integration of Trading and Distributed Application Management

5.1 Using Management Services in the Trader
The trader uses management services to access configuration information, fault notifications, and current state information. It can dynamically create new managed objects which perform monitoring or control tasks (e.g. supervising the availability of a server).

Configuration Information. Each server announces its service offer together with its service attributes to the trader. In addition, the trader can use management services to inquire about properties of the server's host, the network it is connected to, and the characteristics of the network connection between the server and a possible client.

Fault Notification. The trader uses the event filtering facilities of the management system to deliver fault events. The trader actively creates MngOs that supervise servers. Such a managing object is located on an object server physically close to the observed application server. It checks the availability of the service at periodic intervals. Whenever a server fails, the corresponding supervising object notifies the trader, which in turn marks the server as inoperable. The service offer is reactivated either by a re-export of the offer or by the management function that discovered the error. Availability of the service is a special attribute and may be used in an SSE. An inoperable offer remains in the trader but will not be selected.

State Information. Service attributes are either static (invariant over a certain time) or dynamic (varying over time). The source for static attributes is a local database which stores the information provided during the export of the service. Dynamic attributes are accessed through the management system. Both kinds of attributes are contained in the service offer. A dynamic attribute corresponds to an attribute of an MO. During import, the state information is accessed through the management system.
As this may involve many remote systems, care must be taken to enable efficient access. The following concepts have been applied:

Reduction of the Evaluation Tree. The service selection expression can be transformed into an evaluation tree where leaves represent constants or service attributes and inner nodes represent boolean operations. This tree can be reduced by evaluating branches which contain only static attributes. Taking into account that an AND expression evaluates to FALSE if one of its terms is FALSE, and that an OR expression evaluates to TRUE if one of its terms is TRUE, may reduce the number of dynamic attributes to be requested.

Asynchronous Requests. Requests to the management system can be made synchronously or asynchronously. We first request each dynamic attribute asynchronously and then continue to reduce the evaluation tree for each incoming answer. Other evaluation strategies may be applied, e.g. evaluating one branch after the other.

Parallel Query. For each service offer the reduced evaluation tree must be evaluated. We may perform the queries in parallel by using one thread per service offer. Accesses to the management system have been made thread-safe, and responses are delivered to the right thread.

[Figure 5: Parallel query. One thread per service offer (A, B, C) evaluates that offer's reduced evaluation tree.]
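The evaluation-tree reduction described above can be sketched as follows. The tuple-based tree encoding and the attribute names are assumptions for illustration; the point is that static branches are decided first, so AND/OR short-circuiting can make some dynamic attributes unnecessary to fetch:

```python
# Sketch of reducing an SSE evaluation tree over static attributes.
# Nodes are ("attr", name) leaves or ("and"/"or", left, right) inner nodes.

def reduce_tree(node, static_attrs, requested):
    """Return True/False if decidable from static attributes, else a
    partially reduced tree; dynamic attributes still needed are recorded
    in the `requested` set."""
    if node[0] == "attr":
        name = node[1]
        if name in static_attrs:
            return static_attrs[name]   # static: evaluate locally
        requested.add(name)             # dynamic: would need a remote fetch
        return node
    op, left, right = node
    l = reduce_tree(left, static_attrs, requested)
    if op == "and" and l is False:
        return False                    # AND with a FALSE term is FALSE
    if op == "or" and l is True:
        return True                     # OR with a TRUE term is TRUE
    r = reduce_tree(right, static_attrs, requested)
    if op == "and":
        if r is False:
            return False
        if l is True:
            return r
        if r is True:
            return l
    else:  # op == "or"
        if r is True:
            return True
        if l is False:
            return r
        if r is False:
            return l
    return (op, l, r)

# "cheap" and "local" are static, "load_ok" is dynamic. The AND branch is
# killed by the static FALSE, so load_ok is never requested remotely.
tree = ("or",
        ("and", ("attr", "cheap"), ("attr", "load_ok")),
        ("attr", "local"))
needed = set()
result = reduce_tree(tree, {"cheap": False, "local": True}, needed)

# When the static part does not decide, the dynamic attribute remains:
needed2 = set()
rest = reduce_tree(("and", ("attr", "cheap"), ("attr", "load_ok")),
                   {"cheap": True}, needed2)
```

In the first call the tree collapses to TRUE without any remote request; in the second, only the `load_ok` leaf survives and must be fetched.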

Caching and Consistency Predicates. The management system implements a cache for MOs. The cache contains a possibly inconsistent copy (a shadow) of the original MO. Cache updates are performed on each attribute individually. A managing application can declare its requirements on the consistency of a shadow MO and its attributes. These statements are called consistency predicates. The requirement can be stated in terms of time (time predicates), requiring that the shadow attribute is not older than requested. Other forms of consistency predicates refer to versions of the attribute value, e.g. every second change (change predicates, version predicates, delta predicates, threshold predicates, [Kova93]). Based on the declared predicates, the observed change rate of the attribute, and the access rate of all managing applications on one node, an appropriate access and caching strategy is selected. Possible strategies are:
• sending change reports that indicate new values
• accessing the original object after the cache has become invalid (direct access).
Consistency predicates are stored with the service types, are set by a system administrator, or are declared during service export. In any case, the referenced MO is fetched into the local cache and the management infrastructure supervises the declared consistency predicates.

5.2 Controlling Service Selection in the Trader
A client uses the trader to select an appropriate server. Which server is appropriate is defined by a service selection expression (SSE). The SSE is usually given as a parameter of the import operation. This approach requires that each client knows which SSE is appropriate for its goals. Changes in the system configuration may result in changes of the SSE for optimal server selection. Such changes would have to be communicated to each client, which is not a practical solution.
Instead, two kinds of rules may be stored in the trader:

Explicit Service Selection Rules (ESSR). ESSRs introduce a level of indirection for the service selection expression. A stored ESSR can be referenced by name and combined with an SSE during import operations. A system administrator may change an ESSR without notifying the clients. As an example, there can be an ESSR for selecting the server with the best response time, one that maximizes the throughput of the system, and one for servers with minimal cost. There can even be a default SSE for import operations that do not specify any selection criterion.

Implicit Service Selection Rules (ISSR). Stored ISSRs are implicitly used during each service selection. Each service type can be augmented with an ISSR that will always be evaluated in addition to the given SSE. Using ISSRs the system administrator can enforce selection policies. For example, he can prohibit the use of a server, declare maximum load factors, prohibit the use of server machines where interactive users are logged on, and more.
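How stored rules could be combined with a client's SSE at import time can be sketched as follows. The rule stores, the predicate representation, and the function name `effective_constraint` are assumptions for illustration, not the trader's actual rule language:

```python
# Sketch of ESSR/ISSR combination: an explicit rule (ESSR) is referenced
# by name, and the implicit rule (ISSR) attached to the service type is
# always ANDed into the effective constraint.

essr_store = {   # named rules, changeable by the administrator
    "min_cost": lambda a: a["cost"] <= 5,
}
issr_store = {   # per-service-type rules, always enforced
    "compile": lambda a: a["load"] < 0.8,   # e.g. a maximum load factor
}

def effective_constraint(service_type, client_sse=None, essr_name=None):
    parts = [issr_store.get(service_type, lambda a: True)]
    if essr_name:
        parts.append(essr_store[essr_name])   # indirection via the name
    if client_sse:
        parts.append(client_sse)
    return lambda attrs: all(p(attrs) for p in parts)

check = effective_constraint("compile", essr_name="min_cost")
ok = check({"cost": 4, "load": 0.3})           # passes both rules
overloaded = check({"cost": 4, "load": 0.9})   # rejected by the ISSR
```

The administrator can now change the `min_cost` entry without any client noticing, which is exactly the indirection the ESSR provides.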

6. Implementation

The MELODY Management System. The MELODY management system implements a distributed application management system. It consists of a management infrastructure, application components, and management applications. Application components and management applications connect to the management agent at start-up. Management information is stored in managed objects and accessed through the management infrastructure. A general, location-transparent naming scheme is built upon the DCE directory service and provides location information for each MO. The management infrastructure consists of the management agents, object servers, management repositories, and a system management process. The management agent accepts requests from managers, locates the requested MOs, and transfers the requests to the remote agent. The remote agent accesses the local data, which is either contained directly in the application or in local management repositories. In addition, log files and system management information are mapped dynamically to MOs and can be accessed in the usual way. The system management process provides access to system information about the local node. Management functions (in the form of managing objects) can be created dynamically on object servers and perform management tasks. They are grouped into component management and general application management functions. We have implemented several management tools (a graphical user interface, a browsing application, a data gathering and analysing tool, and others) which use this distributed management system to monitor and control distributed applications based on the DCE or the ONC programming environment.

The Trader. The trader is divided into a trader server process (TSP) and a trader user agent (TUA). The TSP is implemented using the DCE RPC facility. The TUA consists of a set of library routines which are linked to programs that use the trader. The TSP uses the management system to access dynamic information.
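The location-transparent naming scheme can be sketched as a name-to-location lookup. The tree layout, path names, and the `resolve` function are illustrative assumptions, not the DCE directory service interface:

```python
# Hedged sketch of location-transparent MO naming: a manager addresses an
# MO by its path name in the naming tree; the infrastructure resolves the
# name to location information (host and agent endpoint), so the manager
# needs no knowledge of where the component actually runs.

naming_tree = {
    "/apps/compile/s1/load": ("hostA", "agent-endpoint-1"),
    "/apps/compile/s2/load": ("hostB", "agent-endpoint-2"),
}

def resolve(mo_name):
    try:
        return naming_tree[mo_name]
    except KeyError:
        raise LookupError(f"no location registered for {mo_name}")

host, endpoint = resolve("/apps/compile/s2/load")
```

If the component moves, only its naming-tree entry changes; the names used by managers stay stable.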

[Figure 6: Trader and management system interaction. On import, the trader reads dynamic attributes of the candidate servers through the management agents (MMA); a managing object on an object server tests server availability, and the trader reads the availability it reports.]

A client imports a service from the trader, for example an execution service. Service selection is optimized by the use of a load factor. The trader builds the evaluation tree, combining the SSE given in the request with any implicit or explicit SSE rule. For all dynamic attributes the trader accesses the management agent (MMA) and requests the attributes for each service offer. The management system locates the server and accesses the desired load attribute. It locates the object server that contains the managing object for the availability of the server and retrieves the availability information. After that it evaluates the SSE for this particular server. After each SSE has been evaluated, it selects the best server available. In the case of a failure, either the availability attribute or the unsuccessful access to the server attribute notifies the trader of the problem. In such a case we omit this particular service offer. Trader and management system are implemented in C++ on RS/6000 workstations running AIX. The trader uses the DCE RPC services for communication with the client. The management system uses a private UDP-based communication service, although a standardized management protocol like OSI (or even SNMP) would be desirable. The offered communication service can be described as a subset of CMIS. The object servers contain a set of predefined managing objects that can be instantiated at runtime. We are currently working on the dynamic loading of managing objects into object servers. Trader and object servers use the DCE thread package to implement concurrent operations. The management agent is an event-driven, single-threaded application. A special graphical user interface enables a system administrator to manipulate the explicitly and implicitly stored SSEs and to define consistency predicates for service attributes.
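The thread-per-offer query with omission of failed offers can be sketched as follows. The function names, the load threshold, and the use of a thread-pool executor (in place of the DCE thread package) are assumptions for illustration:

```python
# Sketch of the trader's parallel per-offer evaluation: the dynamic load
# attribute of each service offer is requested concurrently; offers whose
# attributes cannot be read are omitted, and the best (least loaded)
# server is selected once every SSE has been evaluated.

from concurrent.futures import ThreadPoolExecutor

def evaluate_offer(offer, read_attr):
    try:
        load = read_attr(offer)            # remote access may fail
    except OSError:
        return None                        # unreachable: omit the offer
    return (offer, load) if load < 0.9 else None   # constraining condition

def select_best(offers, read_attr):
    with ThreadPoolExecutor(max_workers=len(offers)) as pool:
        results = pool.map(lambda o: evaluate_offer(o, read_attr), offers)
    candidates = [r for r in results if r is not None]
    # optimization condition: minimal load wins
    return min(candidates, key=lambda r: r[1], default=(None, None))[0]

loads = {"s1": 0.5, "s2": 0.1}
def read_attr(offer):
    if offer == "s3":
        raise OSError("server unreachable")   # simulated failed access
    return loads[offer]

best = select_best(["s1", "s2", "s3"], read_attr)
```

The unreachable server `s3` simply drops out of the candidate list, mirroring how the trader omits a service offer on a failed attribute access.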

7. Performance Measurements

We now present some performance results regarding the access to state information. In the following figures we measured the performance of the trader. The x-axis shows the number of exported service offers; the different lines in each figure represent the number of dynamic attributes requested. All measurements were taken on idle machines and a lightly loaded network. The first figure compares synchronous and asynchronous access to dynamic information.

[Figure 7: Mediation time (seconds) over the number of compile servers, using one thread, for synchronous (left) and asynchronous (right) access to dynamic attributes.]

As can be expected, the mediation time is nearly halved. The second figure adds the parallel evaluation of different service offers. The slight variations of the curves can be related to the number of parallel threads used (in this case three).

[Figure 8: Mediation time (seconds) over the number of compile servers, using three threads, for synchronous (left) and asynchronous (right) access to dynamic attributes.]

The third figure shows the effect of using consistency predicates. Meaningful predicates were set for each dynamic attribute. In this case the measurements were preceded by a warm-up phase for the cache.

[Figure 9: Mediation time (seconds) over the number of compile servers with consistency predicates and a warmed cache, using one thread, for synchronous (left) and asynchronous (right) access to dynamic attributes.]

8. Conclusion
A trader is an important infrastructure component for future distributed systems. With an integrated approach to trading and distributed application management we enabled the use of configuration information, event notifications, and dynamic attributes by the trader. We presented some concepts to optimise the access to dynamic information. We also introduced the concept of an object server where managing objects can be created dynamically. We believe that many other applications that want to utilize management information need the same abstractions to efficiently access management information from different sources in the system. As an extension to the normal trader operation we added implicit selection criteria and trader-maintained lists of selection rules. Both concepts are valuable means for a system administrator to influence the operation of the distributed system. In the resulting trading system, the rules that govern the service selection are stipulated by both the trader user and the system administrator. This breaks with the usual approach that gives total control over the selection rules to the service user. The trader can be further extended to perform resource reservation or service usage negotiation through the use of management services. In the case of failures the trader may interact with the management system to restart failed servers or to reconfigure the overall system. For these extensions, too, a tight integration of trader and management system is absolutely necessary. In our further work, we have to examine the scalability of our approach and explore other ways to optimise the access to dynamic attributes, for example by distributing the query evaluation.

Bibliography:

[ACNM92] E.C. Anderson, B.E. Caram, N. Natarajan, R.J. Mydosh. A Distributed Architecture Framework for Telecommunications Network Operations. In IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM'92), Munich, October 1992.
[ANSA91] ANSA. ANSAware 3.0 Implementation Manual. Manual RM.097.01, Architecture Projects Management Limited, February 1991.
[BeRa91] Mirion Bearman, Kerry Raymond. Federating Traders: an ODP Adventure. In Jan de Meer, Volker Heymer (eds.), Proceedings of the International IFIP Workshop on Open Distributed Processing, 1991.
[BERC89] BERCIM. BERCIM - Report on the Second Milestone. Technical Report (in German), Berlin, May 1989.
[BiNe84] Andrew D. Birrell, Bruce Jay Nelson. Implementing Remote Procedure Calls. ACM Transactions on Computer Systems, 2(1):39–59, February 1984.
[CFSD90] Jeffrey D. Case, Mark S. Fedor, Martin L. Schoffstall, James R. Davin. A Simple Network Management Protocol (SNMP). Technical Report RFC 1157, DDN Network Information Center, SRI International, 1990.
[DuCa93] Andrzej Duda, Jacques Cayuela. System Management in the GUIDE Distributed System. In Proceedings of the IEEE First International Workshop on Systems Management (IWSM'93), Los Angeles, April 1993.
[ISO89] ISO. 2nd DP 10040: Information Processing Systems - Open Systems Interconnection - Systems Management Overview, November 1989.
[ISO91] ISO. Basic Reference Model of Open Distributed Processing. ISO/IEC JTC1/SC21/WG7, December 1991.
[ISO92] ISO. Working Document on Topic 9.1 - ODP Trader. Working paper of the ISO/IEC JTC1/SC21/WG7: N7047, May 1992.
[JoDi92] Derick Joordaan, John Dinger. Requirements for an Infrastructure Supporting Management Applications. In IFIP/IEEE International Workshop on Distributed Systems: Operations & Management (DSOM'92), Munich, October 1992.
[Kova93] Ernö Kovacs. Automatic Selection of an Update Strategy for Management Data. In Proceedings of the IEEE First International Workshop on Systems Management (IWSM'93), Los Angeles, April 1993.
[Mage93] Thomas Magedanz. TMN-Based Management of Intelligent Networks - Medium-term and Long-term Perspectives. In Proceedings of the 4th Workshop on Future Trends in Distributed Systems. IEEE, 1993.
[Mars93] Lindsay F. Marshall. Representing Management Policy Using Contract Objects. In Proceedings of the IEEE First International Workshop on Systems Management (IWSM'93), Los Angeles, April 1993.
[MaSt93] A. Marx, G. Stiege. An Architecture for Integrated System Management of UNIX Systems. In Proceedings of the IEEE First International Workshop on Systems Management (IWSM'93), Los Angeles, April 1993.
[MCWB91] Keith Marzullo, Robert Cooper, Mark D. Wood, Kenneth P. Birman. Tools for Distributed Application Management. COMPUTER, August 1991.
[OMG91] OMG. The Common Object Request Broker: Architecture and Specification. Technical Report 91.12.1, Object Management Group, December 1991.
[OSF91] OSF. The OSF Distributed Management Environment - A White Paper. Technical Report, OSF, January 1991.
[Rose91] Marshall T. Rose. The Simple Book: An Introduction to Management of TCP/IP-Based Internets. Prentice-Hall Series in Innovative Technology. Prentice Hall, Englewood Cliffs, NJ, 1991.
[RVM91] R. Popescu-Zeletin, V. Tschammer, M. Tschichholz. 'Y' Distributed Application Platform. Computer Communications, 14(6):366–374, July/August 1991.
[SJKJ93] M. Sloman, J. Magee, K. Twidle, J. Kramer. An Architecture for Managing Distributed Systems. In Proceedings of the 4th Workshop on Future Trends in Distributed Systems, pp. 40–46. IEEE, 1993.
[YGY91] Yechiam Yemini, German Goldszmidt, Shaula Yemini. Network Management by Delegation. In Integrated Network Management, II, pp. 95–107. IFIP, April 1991.

