Performance Evaluation of Parallel Programs in Parallel and Distributed Systems

Bernd Mohr
Universität Erlangen-Nürnberg, IMMD 7, Martensstraße 3
D-8520 Erlangen, Federal Republic of Germany

To be presented at VAPP IV / CONPAR 90, Zürich, 10.-13. Sep. 1990
Abstract. This paper deals with performance evaluation of parallel and distributed systems based on monitoring of concurrent interdependent activities. First a model is introduced for describing the dynamic behavior of computer systems in terms of events. Then, a distributed hardware/hybrid monitor system based on event driven monitoring and its tool environment SIMPLE are presented. We emphasize the tool environment as a prerequisite for successful performance evaluation. The tool environment for evaluating event traces, which integrates the data access interface TDL/POET and a set of evaluation tools for processing the data, makes evaluation independent of the monitor device(s) and the object system. It provides a problem oriented way of accessing event traces.
1. Introduction

The characteristic feature of parallel and distributed computer systems is that they share load and common resources among several processing nodes in order to increase the performance and reliability of the overall system. Understanding how and why an existing system achieves its performance is essential for the improved design of new systems. To gain the necessary insight, we monitor the real-time sequences of interesting activities in the system under investigation (the object system) and make their interactions visible. This results in an event trace which can be used to describe and reconstruct the system activities and the dynamic behavior of the object system. In most cases, this approach yields excellent explanations of why the system behaves the way it does, as well as valuable hints for system tuning and for the design of new computer systems.

This paper presents a distributed monitor system for hardware and hybrid monitoring of arbitrary parallel or distributed systems. Its master-slave architecture uses one master for evaluation and measurement control and several distributed monitor agents for gathering event traces. Emphasis is put on the tool environment for performance evaluation, installed on the central control and evaluation computer of the system. First, we introduce the underlying model for describing the dynamic behavior of (parallel programs on) parallel and distributed systems. The second part gives a brief description of the hardware structure of our monitor system, the ZM4. The main part of the paper describes SIMPLE, a tool environment for performance evaluation, modeling and visualization of monitored event traces. All tools of SIMPLE use the data access interface TDL/POET to describe and access the measured data (event traces).
TDL/POET can decode measured data of arbitrary structure, format and representation. Therefore, SIMPLE can also analyze measured data which has been recorded by other monitor systems, such as network and logic analyzers or software monitors, or which has been recorded using sampling techniques.
2. Behavioral abstraction - the underlying model

Our view of parallel and distributed systems is called behavioral abstraction, a term introduced by Bates and Wileden [1]. Monitoring complex parallel and distributed systems requires the ability to observe particular aspects of the system's activity from a suitably abstract perspective. Such selective observation permits the user to focus on suspected problem areas without being overwhelmed by the considerable volume of detail present in the system's activities as a whole. Behavioral abstraction is based upon viewing a system's activity as a stream of points of interest, the so-called events, representing significant points of the system's behavior. In defining the events, the user specifies the level of abstraction of his view of the object system.

But what are "points of interest"? Often, events are defined in terms of an automata-based model as state changes in that model, but this definition fails in parallel and distributed systems due to the lack of a global state. As in CSP, we see an event as an atomic action which has no duration. To describe an activity of the monitored system which takes a certain period of time, one has to define a start event and an end event. The selection of the events depends on the system and application monitored and on the problem to be analyzed with the measurement.

A set of attributes is assigned to each event. There are primitive or independent attributes, which are assigned to all events, and dependent attributes, which are assigned depending on the event type. Independent attributes are values like the time the event was recognized and acquired (the acquisition time) or the location in the system where the event occurred; the latter can be a processor identification or a hardware address, and sometimes the location is implicitly defined by the event type. A dependent attribute can be, for example, the process identification of a "create process" event. An event together with its attributes completely describes what occurred when and where in the system. A stream of events sorted by increasing acquisition time is called an (event) trace. Such a trace describes the dynamic behavior of the monitored system completely and can therefore be used to reconstruct it.
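This event model can be made concrete with a small data structure. The following C fragment is purely illustrative; all names and types are our assumptions and not part of any SIMPLE interface:

    #include <stdint.h>

    /* Illustrative sketch of the event model: every event carries the
     * independent attributes (type, acquisition time, location), while
     * the dependent attributes vary with the event type.
     * All names and types are assumptions for illustration only. */

    typedef enum {               /* event types chosen by the user when     */
        EV_CREATE_PROCESS,       /* instrumenting; they define the level of */
        EV_SEND_MSG,             /* abstraction of the measurement          */
        EV_RECV_MSG
    } event_type;

    typedef struct {
        event_type type;         /* independent: what happened              */
        uint64_t   acq_time;     /* independent: when (in clock ticks)      */
        uint16_t   location;     /* independent: where (e.g. processor id)  */
        union {                  /* dependent attributes, by event type     */
            uint32_t process_id; /* e.g. for EV_CREATE_PROCESS              */
            uint32_t msg_len;    /* e.g. for EV_SEND_MSG / EV_RECV_MSG      */
        } attr;
    } event;

    int main(void)
    {
        event e = { EV_CREATE_PROCESS, 123456u, 3, { .process_id = 42 } };
        return e.attr.process_id == 42 ? 0 : 1;
    }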
3. ZM4 - a distributed monitor system

As stated in the last section, the dynamic behavior of computer systems can be described by event traces. But how can we obtain such event traces in a parallel or distributed computer system, where we have to record many parallel and independent event streams and establish a global view of these separate but related traces? For this purpose we developed the distributed hardware and hybrid monitor ZM4* [7]. There are similar approaches in Zürich [4], Kaiserslautern [6] and Karlsruhe [16].

* ZM4 stands for the German name "Zählmonitor 4", which means counting monitor 4.
The ZM4 is able to monitor parallel and spatially distributed (up to 1 km) computer systems. It is the fourth generation of computer system monitors designed and implemented at the University of Erlangen, whose capabilities now go far beyond the simple counting of events available in the first generation when the name was coined. In order to record at spatially separated locations and to evaluate the independently recorded event traces in a global context, it provides a global monitor timebase (tick) with a resolution of 400 ns (locally 100 ns).
Figure 1. ZM4 hardware architecture (object nodes OBJ1..OBJj attached to DPUs in the monitor agents MA1..MAn, which are connected to the CEC via the data channel and synchronized by the MTG via the tick channel)

As shown in fig. 1, the ZM4 is a master/slave configuration of one central control and evaluation computer (CEC), a variable number of distributed monitor agents (MA), and a monitor network consisting of a data channel and a tick channel. The monitor infrastructure is built as far as possible from standard components: the CEC is a UNIX minicomputer or workstation, and the monitor agents are personal computers (IBM-AT). The data channel, which is used to transfer commands, parameters and the measured data between the master and the slaves, is an Ethernet with the TCP/IP protocol. The tick channel is used to synchronize the local clocks of the DPUs with the master clock on the measure tick generator (MTG). The actual recognition and recording of events is done by the dedicated probe units (DPU). A DPU can record up to 4 event streams simultaneously and stores them, together with the global timestamp, in a 32 K word high-speed FIFO buffer, so that the MA can transfer the recorded data to its local disk concurrently with the recording. The MTG and the DPUs are our own development and are implemented as printed circuit boards which can be plugged into the monitor agents. Each monitor agent can have up to 4 DPUs and can therefore monitor up to 16 nodes (OBJ) of the object system. The DPU and the MTG are described in detail in [8]. After the measurement has been started from the CEC, the monitor agents work autonomously, transferring the buffered event traces from the DPUs to the monitor agent's disk. At the end of the measurement, the event traces are transferred to the CEC and evaluated there. The evaluation system is described in the next chapter.
4. SIMPLE - a performance evaluation tool environment

SIMPLE is the performance evaluation tool environment designed and implemented for the central evaluation computer of the ZM4. The name SIMPLE indicates that it is easy to use; the letters stand for Source related and Integrated Multiprocessor and computer Performance evaluation, modeLing and visualization Environment. SIMPLE has a modular structure and standardized interfaces, so that tools and programs developed and implemented by others can be integrated into SIMPLE very easily.

4.1 The concept of a general logical structure of measured data - the basis for independence of measurement and evaluation
The design and implementation of such an evaluation system for measured data is too complex and expensive a task to be carried out for one particular object system or monitor system only. But if the evaluation system shall be able to handle data produced by monitoring arbitrary parallel and distributed computer systems, the following requirements generally have to be met:

• monitor independence: Because of the great variety of parallel and distributed computer systems and applications, it is necessary to use different monitoring techniques (hardware, software, firmware or hybrid monitoring) and different monitoring methods (time driven (sampling) or event driven monitoring). But how can measured data, recorded by different monitor devices and therefore usually differently structured, formatted and represented, be accessed in a uniform way?

• source reference: How can the compressed and usually coded measured data be referred back to the problem oriented identifiers of the (hardware and software) objects of the monitored system?

• object system independence: The characteristic of parallel and distributed systems is that they consist of several processing nodes which are connected by some sort of communication system. But there are many differences in the structure and function of the single nodes and in the configuration of the connection system, and there is a variety of operating systems and applications. How can an evaluation system be applicable to differently configured computer systems with a wide variety of functions?

In order to solve these problems, we have to look at the measured data, because the recorded data is all the evaluation system sees of the monitored system. All the problems mentioned have an effect on the structure, format, representation and meaning of the measured data. If we can find a general logical structure for all the different types of measured data, we are able to abstract from the physical properties of the data. The logical structure can then be used to standardize the access of the evaluation system to the measured data.

As already stated, we are interested in streams of events because they describe the dynamic behavior of the monitored system. In order to obtain these streams of events we have to use event driven monitoring. This means that whenever the monitor device recognizes an event, it stores a data record describing the event that occurred. We therefore call such a data record an event record, or E-record for short. It should contain
the information on what happened when and where. An E-record consists of several components, called record fields, each containing a single value describing one aspect of the event that occurred. In most cases an E-record has record fields containing the event identification and the time the event was recognized. It is possible that a record field or a group of record fields is not always stored in the current E-record, or that a record field is interpreted differently depending on the actual value of another record field. Therefore, E-records can have different lengths even within one event trace. E-record fields can be classified into different field types. There are four basic types of E-record fields which contain the actual information about the state of the monitored system, i.e. the actual measured data:

TOKEN: Record fields of type token contain exactly one value out of a fixed and well defined range of constant values. A token record field is a construction similar to the enumeration types of the usual programming languages. Token fields can be used to describe encoded information like event or processor identifications. Each value has a special, fixed meaning called its interpretation.

FLAGS: Record fields of type flags are like token record fields, but they can contain more than one value out of a fixed, well defined range. This is done by encoding the individual values as bits which are either set or not set. Similar to token values, each bit has a special meaning, also called its interpretation.

TIME: Record fields of type time describe the timing information contained in an E-record. Timing information in the context of monitoring computer systems is usually the content of a special counting register (clock) of the monitor device or of a computer, which is periodically incremented after a predefined period of time, the so-called resolution. Thus, the counting register contains a number of periods (or ticks) which is a measure of the time elapsed since the start of the clock. This timing information can be of arbitrary resolution and mode (a point in time, or the distance from the previous time value).

DATA: Record fields of type data contain in most cases the value of a variable of the monitored software or the contents of a register of the object system. They can be compared with variables in programming languages; only how to interpret their value is specified. This format specification can be a simple data type like INTEGER, UNSIGNED or STRING.
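To make the four field types concrete, the following C sketch shows one possible in-memory representation and the conversion of a time field from ticks to seconds (using, as an example, the 400 ns global resolution of the ZM4). The types and functions are our own assumptions, not TDL/POET data structures:

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative representation of the four basic E-record field types;
     * names and layout are assumptions for illustration, not the actual
     * TDL/POET data structures. */
    typedef enum { F_TOKEN, F_FLAGS, F_TIME, F_DATA } field_type;

    typedef struct {
        field_type type;
        union {
            uint32_t token;      /* one value out of a fixed range           */
            uint32_t flags;      /* several values, encoded as bits          */
            uint64_t ticks;      /* clock ticks since start of the clock     */
            int32_t  data;       /* e.g. a variable of the monitored program */
        } v;
    } field;

    /* A time field holds a tick count; multiplying by the clock resolution
     * yields the elapsed time, e.g. 400 ns per global ZM4 tick. */
    double ticks_to_seconds(uint64_t ticks, double resolution_s)
    {
        return (double)ticks * resolution_s;
    }

    int main(void)
    {
        field t = { F_TIME, { .ticks = 2500000 } };
        /* 2 500 000 ticks at 400 ns per tick = 1 second */
        printf("%.6f s\n", ticks_to_seconds(t.v.ticks, 400e-9));
        return 0;
    }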
Additionally, there are other types of E-record fields which are only relevant to the decoding system, like record length fields, which contain the length of the current or previous E-record, fillers, which contain irrelevant or uninteresting data (e.g. blank fields ensuring that all E-records have the same length), or checksums.

Now, if the event records are stored continually in a file during the measurement, one obtains a sequence of E-records sorted according to increasing time. Such a sequence is called an event trace (file). Sometimes, however, E-records can be lost, because the recording buffer of the monitor device overflows or because the monitor device is not able to record events and transfer the recorded data in parallel; the result is an incomplete event trace. A section of the event trace which has been continuously recorded is called a trace segment. A trace segment describes a completely observed time interval of the
dynamic behavior of the monitored system. Knowledge of the segment borders is important, especially for validation tools based on event traces. Each trace segment may begin with a special data record, the so-called segment header, which contains some useful information about the following segment or simply marks the beginning of a new trace segment.
Figure 2. General event trace structure (hierarchy: event trace / trace segment / segment header / E-record / record field)

With the hierarchy event trace / trace segment / E-record / record field we have a general logical structure which enables us to abstract from the physical structure and representation of the measured data and to establish a relation between measurement and modeling: an E-record with its fields represents an event with its assigned attributes, and the event trace file represents the dynamic behavior expressed in streams of events.

When monitoring parallel or distributed systems, another problem arises: if the local event traces are segmented and we merge them into a global event trace, we may get sections where all E-records of one or more monitored objects are missing while all E-records of the other object nodes are present. Such a section is called a local trace segment (LS). Sections in which the E-records of all monitored objects could have been recorded are called global trace segments (GS). An example situation is shown in fig. 3, and a small sketch of the interval arithmetic follows below.

Figure 3. Global and local trace segments (merging two segmented local traces yields alternating local (LS) and global (GS) segments over time)
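The distinction between global and local segments amounts to interval arithmetic: a global segment is a time interval covered by a recorded segment of every local trace. A minimal C sketch of this intersection, under our own representation (not SIMPLE's):

    #include <stdint.h>
    #include <stdio.h>

    /* A recorded trace segment, given by its start and end time in ticks.
     * This representation is an assumption for illustration only. */
    typedef struct { uint64_t start, end; } segment;

    /* Intersect one segment from each of two local traces: the overlap,
     * if non-empty, belongs to a global segment (GS); the remainder of
     * either segment is only locally covered (LS). */
    int global_overlap(segment a, segment b, segment *gs)
    {
        uint64_t s = a.start > b.start ? a.start : b.start;
        uint64_t e = a.end   < b.end   ? a.end   : b.end;
        if (s >= e)
            return 0;        /* no common interval: purely local coverage */
        gs->start = s;
        gs->end   = e;
        return 1;
    }

    int main(void)
    {
        segment t1 = { 100, 500 };   /* segment of local trace 1 */
        segment t2 = { 300, 800 };   /* segment of local trace 2 */
        segment gs;
        if (global_overlap(t1, t2, &gs))
            printf("GS: [%llu, %llu)\n",
                   (unsigned long long)gs.start, (unsigned long long)gs.end);
        return 0;
    }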
4.2 TDL/POET - a basic tool for accessing measured data
Based on the logical structure introduced in the last section, we designed and implemented the TDL/POET tool in order to meet the requirements listed in section 4.1. The basic idea is to consider the measured data a generic abstract data structure, or an object as in object oriented programming languages. The evaluation system can access the measured data only via a uniform and standardized set of generic procedures. Using these procedures, an evaluation system is able to abstract from the different data formats and representations and thus becomes independent of the monitor device(s) used and of the monitored object systems. The tool consists of two components, as shown in fig. 4:
Figure 4. Data handling with TDL/POET (a TDL description is compiled by TDLC into an access key, which the POET library uses to decode the event trace file for the evaluation programs)

• POET (Problem Oriented Event Trace interface): The POET library is a simple and monitor independent function interface which allows the user to access measured data stored in event traces in a problem oriented manner. In order to access and decode the different kinds of measured data, the POET functions use a so-called access key file, which contains a complete description of the formats and properties of the measured data. In addition to describing the data formats and the representation of the single values, the access key file includes the user defined (problem oriented) identifiers for the recorded values, thus providing the demanded source reference. There is a great variety of POET functions. For example, there are functions to process the E-records of an event trace in any desired order: it is possible to process the E-records in the order they were recorded ("get_next"), or to move the current decoding position in the event trace relative ("forward / backward") or absolute ("goto") to a desired E-record. For each type of E-record field, POET provides an efficient and representation independent way of getting the decoded values of a certain E-record field ("get_token / get_time / ..."). POET also provides a user friendly way of handling time values ("set_resolution / print_time / ..."). A minimal sketch of client code is given at the end of this section.

• TDL (event Trace Description Language): In order to make the construction of the access key more user friendly, we developed the language TDL, which is especially well suited for a problem oriented description of event traces. The access key is produced by the TDL compiler TDLC, which syntactically and semantically checks the user written TDL description. The development of TDL had two principal aims: the first was to provide a language which clearly and naturally reflects the fundamental structure of an event trace; the second was that even a user not familiar with all details of the language should be able to read and understand a given TDL description. Therefore, TDL is largely adapted to the English language. The notation of the syntactic elements of the language and the general structure of a TDL description are
closely related to similar constructs in the programming languages PASCAL and C. By writing an event trace description in TDL, one provides at the same time a documentation of the performed measurement.

The monitor independence enables us to analyze with SIMPLE measured data which were recorded by other monitor systems like network and logic analyzers or software monitors, or even traces generated by simulation tools. We are independent of all properties of an object system, especially of its operating system and the programming languages used. In order to adapt our environment to another kind of measurement, one only has to write a TDL description of the event trace to be analyzed. Being independent of the object system and the monitor device(s), the TDL/POET interface has another inherent advantage: as it provides a uniform interface, the evaluation of measured data is independent of their recording. This enabled us to design and implement the tool environment SIMPLE in parallel with the design of our distributed monitor system ZM4.

The TDL/POET tool is implemented under the operating system UNIX in the programming language C. A prototype was designed and implemented in 1987. The growing interest in the tool and two years of experience with its use led to a complete redesign and reimplementation of the language and the related tools in 1989. The now available version 5.1 is much faster and provides more functions than the prototype [10].
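To give an impression of how an evaluation program sits on top of POET, here is a minimal, self-contained C sketch. The function names get_next, get_token and print_time are taken from the description above, but their signatures are not given in this paper, so all types, parameters and the in-memory "trace" below are our assumptions rather than the actual POET interface:

    #include <stdio.h>

    /* Minimal self-contained sketch of an evaluation program on top of a
     * POET-like interface. The real POET library decodes E-records from a
     * trace file using an access key; here the "trace" is a small in-memory
     * array so that the sketch can run stand-alone. All names, signatures
     * and example data are assumptions, not the actual POET interface. */

    typedef struct { int event; unsigned long ticks; } e_record;

    static const e_record trace[] = {      /* stand-in for the trace file */
        { 1,  250 }, { 2,  900 }, { 1, 1400 },
    };
    static int pos = -1;                   /* current decoding position   */

    static int get_next(void)              /* advance to the next E-record */
    {
        return ++pos < (int)(sizeof trace / sizeof trace[0]);
    }

    static int get_token(void)             /* decoded event identification */
    {
        return trace[pos].event;
    }

    static void print_time(double resolution_s)  /* ticks -> seconds */
    {
        printf("%.6f s", trace[pos].ticks * resolution_s);
    }

    int main(void)
    {
        while (get_next()) {               /* walk the trace in order      */
            printf("event %d at ", get_token());
            print_time(400e-9);            /* 400 ns global ZM4 resolution */
            printf("\n");
        }
        return 0;
    }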
4.3 The performance evaluation tools of SIMPLE
Performance evaluation of measured data, especially in large projects, can only be done if a powerful set of tools is provided. In this section, we give a short description of the comprehensive set of performance evaluation and modeling tools making up the SIMPLE environment. They are all based on the data access interface TDL/POET and are therefore independent of the monitor and the object system. A view of the SIMPLE concept and its essential tools is shown in fig. 5. We introduce the single tools by walking through the main steps of processing measured data. There are five categories of activities which contribute to solving the performance evaluation problem: generating an integrated view (I), validating traces (II), accessing traces (III), evaluating traces (IV) and general measurement support (V). The first two steps prepare the recorded event traces for the actual performance evaluation:
I. Generation of an integrated view of the whole object system: The first step in analyzing the recorded data of a new measurement is to generate a global event trace in order to have an integrated view of the whole object system. Such an integrated view is necessary to detect and evaluate the interactions between the interdependent activities of the local object nodes. This task is done by the tool MERGE: it takes the local event trace files (trace) and the corresponding access key files (key) as input and generates the global event trace (systrace) and the corresponding access key (syskey). The E-records of the local event traces are sorted according to increasing time (see the merge sketch below). This is easy when a monitor system providing a global timebase, like the ZM4, was used. In all other cases other means of ordering events are needed, e.g. using local clocks and periodically recording global events in all local event traces; by comparing their acquisition time values, the local clock values can be adjusted. Normally, however, only a rough approximation of the global time can be achieved this way.
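Merging the locally time-sorted traces into one globally sorted trace is, at its core, a k-way merge on the acquisition time. The following C sketch shows the idea for two in-memory traces; the actual MERGE tool works on trace files and access keys via POET, so everything here is a simplification under our own assumptions:

    #include <stdio.h>

    /* Two-way merge of locally time-sorted event streams into one global
     * stream, keyed on the acquisition time (in global ticks). A sketch of
     * the idea behind MERGE, not its implementation: real traces are files
     * decoded via POET, and any number of local traces may be merged. */

    typedef struct { unsigned long time; int node; int event; } e_record;

    static void merge(const e_record *a, int na,
                      const e_record *b, int nb)
    {
        int i = 0, j = 0;
        while (i < na || j < nb) {
            /* pick the stream whose next E-record is earlier */
            const e_record *r;
            if (j >= nb || (i < na && a[i].time <= b[j].time))
                r = &a[i++];
            else
                r = &b[j++];
            printf("t=%lu node=%d event=%d\n", r->time, r->node, r->event);
        }
    }

    int main(void)
    {
        const e_record t1[] = { { 100, 1, 7 }, { 400, 1, 8 } };
        const e_record t2[] = { { 250, 2, 7 }, { 300, 2, 9 } };
        merge(t1, 2, t2, 2);
        return 0;
    }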
Figure 5. Overview of SIMPLE tools (the local traces and keys are merged by MERGE into systrace and syskey; CHECKTRACE and VARUS validate the trace against assertions; FILTER and ADAR provide restricted views via filter rules and activity definitions; LIST, the S-POET interface with the S package, the modeling tools MEDA, GSPN, SPASS, PEPP, HIT and QNAP2, and the visualization tools SMART and VISIMON evaluate the trace; AICOS, POETCOMP and ZM4-ADMIN provide general measurement support)

II. Validation and plausibility tests: The next step is to perform some validation checks on the recorded event trace in order to test whether all monitor devices used have worked correctly and the measurement was performed without errors. The program CHECKTRACE performs some simple tests which can be applied to any event trace; e.g. it checks whether the E-records are correctly sorted according to increasing acquisition time, or whether the token fields contain
only defined token values. For more detailed and application related validation checks, the tool VARUS (VAlidating RUles checking System) was designed: the user specifies validation rules specific to the measurement and object system in a formal language (assertions), and these rules are tested against the given event trace. Both tools generate a report which contains all detected errors.

III. Accessing event traces: Sometimes only a restricted view of the measured data is needed. Such a viewpoint is defined by filtering and clustering events from the monitored event trace. In SIMPLE, both functions are supported by tools:

• Selection (Filtering): With the tool FILTER it is possible to select E-records depending on the values of their record fields. The tool is implemented as an additional function of the POET library, which can be used to move the current decoding position in the event trace to the next E-record matching the given restrictions. The filter rules can be specified in a formal language.

• Clustering: Often, interesting aspects of the behavior of the monitored system are determined by a sequence of events. We call such sequences activities. The user can define an activity, in a formal language, as a regular expression over events or previously defined activities, and can assign new attributes to the activity. The tool ADAR (Activity Definition And Recognition system) reads this definition file, processes an event trace to find the occurrences of the defined activities, and computes their attributes (a sketch of the underlying start/end event pairing follows at the end of this section). This tool can be used for different purposes. First, it can automatically compute global performance indices from the measured data. Second, by comparing the monitored behavior of the system's activity with the specification of the expected system behavior defined by the hierarchy of activities, the user gets hints for identifying sources of errors and performance loss in the system. Beyond that, the activities and their computed attributes can be transferred to the data analysis package S (see below) and used for further interactive evaluation.

IV. Evaluating event traces: The simplest form of analyzing measured data is the generation of a human readable trace protocol, which can be done by the program LIST. This is quite useful, but it normally results in an enormous stack of printed paper, and no one has the time and persistence to read and analyze all the printed data. The user wants to analyze the measured data interactively, with graphics support and in a high-level environment. The actual performance evaluation tools provided by SIMPLE can be divided into three classes according to their function:

• Interactive data analysis and static graphics: For this task we integrated the commercial data analysis and graphics package S from AT&T [2] into SIMPLE. S provides a high-level programming language for data analysis and graphical evaluations like histograms or pie charts. We extended the S package with some additional functions for accessing the data of event traces (the S-POET interface) and with a function for plotting time-state diagrams (Gantt diagrams). Mainly, we use S to compute statistical indices and their distributions, and to easily select interesting sections of the traces.

• Integration of measurement and modeling: Modeling tools can be used to obtain predictions about systems (configurations) that are not (yet) available. But it is necessary to use realistic parameters and to validate the models by measurements [11].
Empirical distributions computed with the S package can be approximated by analytical distributions, e.g. with the program MEDA [15]. We are about to gain experience with the modeling tools GSPN [5], SPASS [12], PEPP [9], QNAP2 [13] and HIT [3].

• Dynamic trace visualization or execution animation: The visualization of an event trace presents the monitored dynamic behavior at a speed which can be followed by a human user, exposing properties of the program or system that might otherwise be difficult to understand or might even remain unnoticed. There is a simple visualization program, SMART (Slow Motion Animated Review of Traces), which can be used on any ASCII terminal, and VISIMON, which has enhanced graphics capabilities and is based on X-Windows.
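The activity recognition described under III can be pictured as pairing start and end events and deriving attributes such as the duration. The following self-contained C sketch shows this idea; it is not ADAR, which works with regular expressions over events specified in a formal language, and the event codes here are invented:

    #include <stdio.h>

    /* Toy activity recognition: an activity is delimited by a start event
     * and an end event, and its duration is a derived attribute. ADAR is
     * far more general (regular expressions, hierarchies of activities);
     * this sketch and its event codes are illustrative assumptions only. */

    enum { EV_START = 1, EV_END = 2 };

    typedef struct { unsigned long time; int event; } e_record;

    int main(void)
    {
        const e_record trace[] = {
            { 100, EV_START }, { 340, EV_END },
            { 500, EV_START }, { 910, EV_END },
        };
        const int n = sizeof trace / sizeof trace[0];
        unsigned long start = 0;
        int open = 0;                      /* inside an activity?        */

        for (int i = 0; i < n; i++) {
            if (trace[i].event == EV_START) {
                start = trace[i].time;
                open = 1;
            } else if (trace[i].event == EV_END && open) {
                /* duration in ticks as a computed activity attribute */
                printf("activity: [%lu, %lu), duration %lu ticks\n",
                       start, trace[i].time, trace[i].time - start);
                open = 0;
            }
        }
        return 0;
    }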
V. General measurement support: Finally, there are some tools which do not perform the actual performance evaluation of the data, but which support the user in various tasks during a measurement:

• Computer aided instrumentation: If software or hybrid monitoring is used, special monitoring statements which implement the event recognition and recording have to be included in the object software. This process is called instrumentation and should be based on the functional model of the monitored application. The tool AICOS (Automatic Instrumentation of C Object Software) provides automatic instrumentation of procedures, procedure calls or arbitrary statements in object software written in the programming language C (a hypothetical example follows after this list).

• Time optimized POET functions: For on-line evaluations or time critical applications like animations, the general purpose POET functions may be too slow. The program POETCOMP automatically generates functions in the programming language C which provide the same interface as POET, but which are optimized, on the basis of the corresponding access key file, for the desired type of measured data.

• Data administration: Each measurement produces a lot of related files. For the administration of all these files we designed and implemented the program ZM4-ADMIN. The tool is based on the UNIX filesystem and provides a menu driven user interface. It classifies all files in the hierarchy project/experiment/measurement and stores additional information like the date, the reason and the experimenter of a measurement.
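To illustrate the kind of transformation AICOS automates, here is a hypothetical instrumented C procedure. The recording call GEN_EVENT, the event codes and the stub bodies are invented for this sketch; they are not the statements AICOS actually inserts:

    #include <stdio.h>

    /* Hypothetical result of instrumenting a C procedure. GEN_EVENT, the
     * event codes and the stub bodies are invented for this sketch and
     * are not the statements AICOS actually inserts. */

    #define EV_WORKER_ENTER 10
    #define EV_WORKER_LEAVE 11

    /* Stand-in for the monitor's recording call: in hybrid monitoring
     * this would write the event code (and an attribute) to a hardware
     * interface read by a DPU. */
    static void GEN_EVENT(int code, int attr)
    {
        printf("event %d, attr %d\n", code, attr);
    }

    static int compute(int job) { return job * 2; }

    /* The original body was just "return compute(job);"; instrumentation
     * brackets it with a start event and an end event. */
    static int worker(int job)
    {
        GEN_EVENT(EV_WORKER_ENTER, job);
        int r = compute(job);
        GEN_EVENT(EV_WORKER_LEAVE, job);
        return r;
    }

    int main(void)
    {
        return worker(21) == 42 ? 0 : 1;
    }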
5. Conclusion

Both the distributed monitor system ZM4 and its performance evaluation tool environment SIMPLE are implemented to a large extent. The current configuration of the ZM4 consists of three monitor agents with one MTG and about ten DPUs, which enables us to monitor up to 40 object nodes. As the central control and evaluation computer we use a network of several SUN workstations. The implementation of the main tools of the performance evaluation environment SIMPLE, like TDL/POET, FILTER, VARUS, CHECKTRACE, LIST, SMART and VISIMON, and the integration of tools like S and the modeling tools is completed. We expect prototypes of the missing tools ADAR, AICOS and POETCOMP this year.
We have now used our equipment for over three years in several projects, such as supporting the implementation of a UNIX multiprocessor operating system by accompanying measurements [14], and various measurements of parallel programs on the DIRMU multiprocessor [7]. The experience thereby gained led to extensions and improvements of our monitor system. The tests performed show that it can successfully be used in practice. We believe that the behavioral abstraction approach, and the TDL/POET based tools supporting it, provide a valuable aid to developers and users of parallel and distributed systems. Work is proceeding. Some important areas are the integration of our environment into the programming environment of a multiprocessor, the design and implementation of a more user friendly interface, and a better integration of the modeling tools.

References

[1] P. C. Bates, J. C. Wileden, High-Level Debugging of Distributed Systems: The Behavioral Abstraction Approach, The Journal of Systems and Software, 255-264, 1983.
[2] R. A. Becker, J. M. Chambers, A. R. Wilks, The New S Language, Wadsworth, 1988.
[3] H. Beilner, Workload Characterization and Performance Modelling Tools, Proc. of the Int. Workshop "Workload Characterization of Computer Systems", Pavia, 1985.
[4] H. Burkhart, R. Millen, Monitoring Tools in a Multiprocessor Environment, International Conference "Parallel Computing 85", 1986.
[5] G. Chiola, A Graphical Petri Net Tool for Performance Analysis, Proc. of the 3rd Int. Workshop on Modeling Techniques and Performance Evaluation, Paris, 1987.
[6] D. Haban, D. Wybranietz, Monitoring and Performance Measuring Distributed Systems During Operation, Proc. of the 1988 ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, Santa Fe, 1988.
[7] R. Hofmann, R. Klar, N. Luttenberger, B. Mohr, G. Werner, An Approach to Monitoring and Modeling of Multiprocessor and Multicomputer Systems, Proc. of the Int. Seminar on Performance of Distributed and Parallel Systems, Kyoto, 1988.
[8] R. Hofmann, Gesicherte Zeitbezüge beim Monitoring von Multiprozessorsystemen, Proc. of the 11th ITG/GI-Conf. on Architecture of Computing Systems, 1990.
[9] M. Kienow, Portierung und Erweiterung eines Graphanalyseprogramms, Studienarbeit, University of Erlangen, 1990.
[10] B. Mohr, TDL/POET - Version 5.1, TR 7/89, University of Erlangen, IMMD 7, 1989.
[11] N. Luttenberger, Monitoring von Multiprozessor- und Multicomputer-Systemen, PhD thesis, University of Erlangen, 1989.
[12] H. Pingel, Stochastische Bewertung serien-paralleler Aufgabenstrukturen, Studienarbeit, University of Erlangen, 1988.
[13] D. Potier, M. Veran, QNAP2: A Portable Environment for Queueing Network Modelling, Proc. of the Int. Conf. on Modelling Techniques and Tools, 1984.
[14] A. Quick, Synchronisierte Software-Messungen zur Bewertung des dynamischen Verhaltens eines UNIX-Multiprozessor-Betriebssystems, Proc. of the 5th GI/ITG-Fachtagung MMB '89.
[15] L. Schmickler, Erweiterung des Verfahrens MEDA zur analytischen Beschreibung empirischer Verteilungsfunktionen, Proc. of the 5th GI/ITG-Fachtagung MMB '89.
[16] M. Zieher, M. Zitterbart, A Distributed Performance Evaluation System, International Conference EFOC/LAN 88, Amsterdam, 1988.