Using Distributed Active Object Model to Implement TMN Services

Kleber Xavier Sampaio de Souza and Ivanil Sebastião Bonatti
DT/FEE/UNICAMP, Caixa Postal 6101, 13081-970 Campinas SP, Brazil

Abstract

In this paper, the Distributed Active Objects Platform (DAOP), used to implement services in the Telecommunications Management Network (TMN), is proposed. The main features of this platform are: the rapid exchange of information between co-located active objects, which share the same address space; its openness, due to the use of ISODE (ISO Development Environment) as its communication infrastructure; the concurrent processing of activities within each active object; and the ability to associate different priority levels with such activities. These features make it especially suitable for the implementation of TMN services in high-speed network environments such as Asynchronous Transfer Mode (ATM).

1 Introduction

The advent of the Asynchronous Transfer Mode (ATM), standardized by the ITU-T as a multiplexing and switching technique for B-ISDN networks [8], has brought many challenges regarding their control and management. These networks are being designed to support voice, video and data, whose traffic profiles vary widely in their bandwidth requirements [18]. However, the guarantee of a certain QOS, provided by the communications infrastructure, does not by itself assure the desired effect in the applications environment. It is necessary to strengthen the cooperation among related areas such as network control, network management and applications management. The timing constraints imposed on network nodes are also very strong: for example, the call admission control algorithms have to find a solution within the call connection inter-arrival time [10, 17]. From the point of view of management, agents (such as TMN Q-adaptors or network elements) responsible for monitoring the state of a node and reporting on abnormal situations are required to react as fast as possible to some events, while leaving others, considered less important, to a later time. That makes current implementations of agents based on FIFO treatment of enqueued events completely inadequate. (Work supported by Empresa Brasileira de Pesquisa Agropecuária.)

The Telecommunications Management Network (TMN) [11] is divided into functional blocks whose interaction rules are defined in accordance with reference points (an overview of the TMN architecture is presented in Section 2). In a specific physical configuration, several functional blocks may be co-located in what is called a building block. This means that functional blocks performing agent, protocol conversion, proxy or manager functions may be in the same machine or even in the same process. In any case, it is desirable (the TMN requirements for high-speed networks are studied in Section 3) that one be able to establish which activities are prioritized within each building block, and that building blocks be able to process those activities concurrently (the term concurrency is used here with the meaning of potential parallelism, which is effectively obtained when more than one processor is available).

The goal of this work is to describe the Distributed Active Objects Platform, an infrastructure based on the concept of multithreaded active objects, whose execution priorities can be defined by the application programmer in accordance with the management function each object is responsible for. By the introduction of multithreaded address spaces, the DAOP design follows current trends in the area of distributed operating systems, such as MACH [1] and CHORUS [6], in which multiple threads are a key concept. In fact, the use of multiple threads in multimedia and real-time operating systems was so successful that many other operating systems began to provide them, as is the case of SunOS and Solaris [19, 4, 15], and a threads extension of the IEEE POSIX standard was produced [16, 5]. The idea behind threads is to provide an abstraction of the concept of flow of control by turning it into a data type. Each thread has its own program counter, general registers and stack, making threads suitable for concurrent execution within a process. Multithreaded address spaces encompass and enhance the concept of process in operating systems in the sense that they allow the user to define the desired degree of concurrency within the same process, while leaving to the operating system the task of allocating threads to processors (a minimal sketch is given at the end of this section).

The paper starts with the presentation of the TMN model in Section 2. Next, in Section 3, the requirements imposed on the TMN functional elements to manage high-speed networks are investigated. Taking such requirements as a reference, the use of active objects as an alternative for their satisfaction is examined in Section 4. Then, in Section 5, a supporting platform based on multithreaded active objects is proposed. Finally, that platform is used to implement the Concurrent MIB Server.
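To make the thread abstraction above concrete, here is a minimal sketch. It assumes the standard C++ thread class purely for illustration; the platform itself relied on the SunOS LWP library, not on this API. Each flow of control below is a value of a thread type, created, run concurrently within one process, and joined:

#include <iostream>
#include <string>
#include <thread>
#include <vector>

// Each std::thread wraps an independent flow of control with its own stack
// and registers; the enclosing process supplies the shared address space.
int main() {
    std::vector<std::thread> flows;
    for (int i = 0; i < 3; ++i) {
        flows.emplace_back([i] {
            std::cout << ("flow of control " + std::to_string(i) + " running\n");
        });
    }
    for (auto& t : flows) t.join();  // wait for all flows of control to finish
    return 0;
}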

2 The Telecommunications Management Network

The Telecommunications Management Network, whose architecture follows Recommendation ITU-T M.3010, is a computer-based system designed to support management activities associated with telecommunications networks. It encompasses OSI Systems Management [9] in the sense that the services and protocols used in systems management represent a subset of the management capabilities that can be provided by the TMN and that may be required by an administration [11]. Another important point about the TMN is that it foresees interactions with other management architectures through adaptors, as presented below.

2.1 Functional Model

The central point within the TMN architecture is the relation between OSF (Operations Systems Function) blocks and NEF (Network Element Function) blocks, and among OSF blocks themselves (see Figure 1). The former acts as a manager with respect to the latter. Whenever that relation is not possible in a direct way, a Mediation Function (MF) or a Q-Adaptor is used:

- Q-Adaptor Function (QAF): used when the interface presented by an OSF or NEF is not compatible with those adopted as TMN standards, so that a conversion is necessary. As an example, one may cite the interoperation between an OSI manager and an Internet manager, in which case the adaptor will have to implement both management models and perform the necessary (or possible) conversions;

- Mediation Function (MF): ensures that the information passing between OSF and NEF (or QAF) blocks conforms to the expectations of those blocks. Differently from the QAF, the interfaces between blocks are standard ones, and it enables cost-effective implementations of the connection of NEFs of different complexities to the same OSF. The protocol stacks are not necessarily the same, in which case a convergence stack is used. This case is illustrated by the interfaces Qx and Q3 in Figure 1, where Q3 is the direct interface to OSs. The mediation may consist of [11]: information conversion between information models, e.g. object-oriented; protocol interworking, e.g. between a connection-oriented and a connectionless protocol; data handling, e.g. data translation and data formatting; decision making, e.g. thresholding, routing/re-routing of data and circuit test analysis; and data storage, e.g. containing the network configuration.

Figure 1: TMN Architecture. (Legend: OSF: Operations System Function; NEF: Network Element Function; WS: Workstation Function; QA: Q-Adaptor; NE: Network Element; OS: Operations System; q: reference point; Q3, X and F: interface types; DCN: Data Communications Network.)

To delineate the service boundary between two management function blocks, the concept of Reference Point was introduced. Three classes of reference points were defined:

- q class: between OSF, MF, NEF and QAF;

- f class: between OSF/MF and WSF (Workstation Function). The WSF is the user interface to the TMN, but the WSF block is supposed to provide only the means to interpret management information, and not the interface itself;

- x class: between the OSFs of two TMNs, or between an OSF and the equivalent OSF-like functionality of another network. This point is particularly important to us, because it is the interaction point between two administrations, such as a VASP and a PNO (Public Network Operator).

Figure 2: Enterprise viewpoint of VASP-PNO interaction (Business, Service and Network layer OSFs on the PNO side and Business and Service layer OSFs on the VASP side, over their NEFs).

2.2 Information Model

In its information model, like OSI Systems Management, the TMN uses an object-oriented approach, with objects representing an abstract view of the resources, i.e. considering only the information necessary for their management. A resource may be represented by one or several objects, so there is not necessarily a one-to-one mapping. The Shared Management Knowledge notion provides the manager and the agent with a common view of the information exchanged. That view is established during context negotiation, a process realized in the connection establishment between two application layer entities (in this case, the manager and the agent).

The concept of Logical Layered Architecture (LLA) is the main addition of the TMN to OSI Systems Management. According to the LLA, the architecture can be thought of as a series of layers [11], the scope of each layer being broader than that of the layer below it. As a consequence, instead of remaining with only a planar view of the managed network, given by interacting OSF domains, one acquires a spatial view organized by level of abstraction. In this way, one may define OSFs responsible for network, service and business management, as illustrated in Figure 2.

In Figure 2, the Value Added Service Provider (VASP) provides a specialized service, e.g. an Internet service, over the basic ones provided by a (public) network. If the VASP is not also the basic provider, there is an interaction between the Service Layer OSFs in both networks, either during service establishment or in case of eventual problems. The Service Layer is concerned with all service transactions, such as provision/cessation of a service, accounts, QOS and fault reporting, among others.

2.3 Interactions Model

The physical realization of a TMN building block is a composition of one or more of its functional blocks. For example, a Network Element (NE) may be composed of NEF, MF, QAF, OSF and WSF, which means that all the functionalities of those blocks might be present in an NE, although the only mandatory one is the NEF. Whenever such a composition takes place, complete interfaces between functional blocks are not obligatory, but the reference point must be preserved. For example, if one composes NEF and MF in an NE, the qx reference point, and not the Qx interface, is preserved. The difference between an interaction point and an interface is that in the latter a channel, using one of the standard protocols adopted for that interface, is established to link the concerned building blocks (this notion of interface is quite different from the usual one adopted in object modeling, which does not require a channel carrying standard protocols for an interface to be defined). Figure 1 illustrates an NE composed of the NEF and OSF functions, whose information exchange is through the q3 interaction point. However, its interaction with the external OS is realized through the Q3 interface.

3 Requirements to Manage High-Speed Networks

Within high-speed networks, such as ATM, a holistic approach is used to perform flow control. To avoid congestion, the internal queues of the ATM switches are closely monitored and, according to predefined levels, one tries to control the injection of data at the UNI (User-Network Interface) by warning an NE or Q-Adaptor (QA). It is very likely that such an element will be a dedicated one, implemented in hardware, so as to respond very quickly, before the situation becomes worse. However, in the presence of congestion control, the manager of the affected node (the Operations System (OS) in the TMN model) may decide which applications are the most important ones and instruct the operating system to prioritize them. Therefore, it is very desirable to have NEs capable of reacting rapidly to stimuli gathered by other NEs or QAs directly connected to network devices, and so the performance of NEs is of vital importance.

As shown in the previous section, management in the TMN model is performed through the interaction of its constituent building blocks, such as NEs, QAs and OSs. These blocks, in their turn, are composed of functional blocks. Since the tasks performed by different functional blocks are independent from each other, the maximum degree of performance in such a composition is obtained by allowing each block to execute its activities concurrently with the others. In fact, management operations deal with the manipulation of management information, which sometimes takes a long time to be carried out (e.g. access to NE information for algorithm processing). During that time, if the building blocks inside the element were not engineered to serve requests concurrently, other services would be blocked until the completion of the request being serviced. The performance of a functional block can be improved even further if one introduces concurrence not only among the blocks, but inside the block itself. This is especially important when the block contains the MIB as one of its components (the MIB is mandatory in blocks containing a NEF or an OSF with agent role, and optional in other configurations). This issue is discussed in the next section.

3.1 MIB Data Classification

In order to better understand the difficulties involved in introducing concurrence in a module which performs operations on a MIB, it is necessary to study the semantics of the data items the MIB is composed of. Network management data can be broadly classified into three types: sensor data, structural data and control data. As a matter of fact, according to the OSI management framework, an object may be defined as being composed of the three types of elements: for example, an object which represents a circuit may have an attribute VIRTUAL CIRCUIT NUMBER (structural data), an action ACTIVATE (control data), and a notification CIRCUIT DISCONNECT (sensor data). However, this is a logical view of the object, and not its realization, which is the aspect we are concerned with.

- Sensor Data: data received from the monitoring processes. It can be sporadic, like a fault notification, or periodic, like the result of a polling operation performed on some device to evaluate its performance;

- Control Data: may be the result of an action taken by the network manager or by an automated process after analyzing the sensor information. For example, if a hacker is detected at a certain node, an action may have been programmed to cut the communication links towards that node, or to shut the node down;

- Structural Data: composed of information which changes very rarely, like the capacity of a given link, its signal/noise ratio, and its state (active/inactive).

From this classification, it is possible to see not only that concurrence among activities is important, but also that there exist different degrees of priority among those activities. For example, a request concerning the update of periodic sensor data may be given lower priority than one which reports on a fault.
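The classification can be made concrete with a small sketch in C++. The names and priority values below are illustrative, not taken from the platform: the circuit object of the example above holds structural data, exposes a control action, and emits a sensor notification, and each class maps to a default processing priority.

#include <iostream>

// Illustrative classification of MIB data items (names are hypothetical).
enum class DataClass { Sensor, Control, Structural };

// A default priority per data class: sporadic sensor data and control
// actions are more urgent than updates to rarely changing structural data.
int defaultPriority(DataClass c) {
    switch (c) {
        case DataClass::Sensor:     return 2;  // e.g. fault notifications
        case DataClass::Control:    return 2;  // e.g. the ACTIVATE action
        case DataClass::Structural: return 0;  // e.g. VIRTUAL CIRCUIT NUMBER
    }
    return 0;
}

// Sketch of the circuit object used as an example in the text.
struct Circuit {
    int virtualCircuitNumber = 42;           // structural data (attribute)
    void activate() {                        // control data (action)
        std::cout << "circuit " << virtualCircuitNumber << " activated\n";
    }
    void notifyDisconnect() {                // sensor data (notification)
        std::cout << "CIRCUIT DISCONNECT reported at priority "
                  << defaultPriority(DataClass::Sensor) << "\n";
    }
};

int main() {
    Circuit c;
    c.activate();
    c.notifyDisconnect();
}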

3.2 General Requirements

From the preceding analysis, the requirements to be satisfied by a platform intended to manage high-speed networks in accordance with the TMN model are:

1. Concurrent processing of management activities instead of their serialization;

2. Ability to assign different priority levels to activities being processed concurrently;

3. Provision of a standard protocol infrastructure, since that is the means of interaction between TMN building blocks.

4 Objects and Concurrence

This section presents the model of active objects and shows that they satisfy the concurrency requirements established in the preceding section.

An object is an entity which holds an internal state, accessed through a set of operations (methods). These operations are grouped in accordance with a functional classification to form a service interface, e.g. the management and non-management interfaces of an object. Objects also have properties such as encapsulation and abstraction, and exhibit a behaviour [13]. Encapsulation is the property which ensures that the object's internal state is accessible only through interactions performed at one of its interfaces; that means an operation in one object cannot directly affect the state of another. Abstraction implies that the internal implementation details of an object are hidden from other objects. Behaviour is the set of potential actions that can be observed at all of its interfaces.

An Active Object is a generalization of the concept of object present in many object-oriented programming languages [3]. Regarding execution control, the common scenario is to find languages (C++, Objective-C) whose object model is passive, i.e. an object is activated when it receives a message from another object. As there is just one thread of control, the sender becomes passive while the receiver is processing the message, being reactivated only after the result is returned. To obtain parallelism, any of the following approaches can be adopted [2]:

1. Allow an object to be active without having received a message;

2. Allow the receiving object to continue its execution after it returns its result;

3. Send messages to several objects at once;

4. Allow the sender of a message to proceed in parallel with the receiver.

Whichever alternative is chosen, the central point is to provide an object with a set of threads of control, so that it is able to allocate them to its methods. In particular, the use of multiple threads in conjunction with an infrastructure performing time sharing in the allocation of processors to threads (the scheduler of the platform's kernel performs this task) is enough to guarantee that alternative (1) above is satisfied, because the object is effectively active during its time slice. The other requirements are satisfied by the distribution of threads over several priority levels and the use of the ISODE package, as discussed in the next section.
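A minimal sketch of alternatives (1) and (4), assuming standard C++ threads rather than the LWP-based threads used by the platform: the object owns a worker thread that keeps draining a request queue, so it is active even when no message has just arrived, while senders return immediately after enqueuing and proceed in parallel with the receiver.

#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

// Sketch of an active object: it owns its own thread of control and serves
// queued method invocations asynchronously (all names are illustrative).
class ActiveObject {
public:
    ActiveObject() : worker_([this] { run(); }) {}
    ~ActiveObject() {
        { std::lock_guard<std::mutex> lk(m_); done_ = true; }
        cv_.notify_one();
        worker_.join();
    }
    // The sender enqueues a request and proceeds in parallel with the receiver.
    void send(std::function<void()> request) {
        { std::lock_guard<std::mutex> lk(m_); queue_.push(std::move(request)); }
        cv_.notify_one();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> req;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return done_ || !queue_.empty(); });
                if (done_ && queue_.empty()) return;
                req = std::move(queue_.front());
                queue_.pop();
            }
            req();  // the object is active, processing its own work
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    bool done_ = false;
    std::thread worker_;  // declared last so it starts after the other members
};

int main() {
    ActiveObject agent;
    agent.send([] { std::cout << "handling request 1\n"; });
    agent.send([] { std::cout << "handling request 2\n"; });
    // the sender continues here while the agent processes concurrently
}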

Figure 3: Platform architecture (co-located active objects within a process, supported by the scheduler, lock manager and threads manager over the LWP library, and by protocol objects over ISODE, running on SunOS).

5 Supporting Platform

The DAOP was developed by the authors partly in the PRiSM Laboratory of the University of Versailles Saint-Quentin, in France, and partly in the Laboratory of Telematics of the State University of Campinas (UNICAMP), in Brazil. The DAOP was implemented in C++ [20], under SunOS 4.1, and the ISODE package was used as its communication infrastructure. The use of the ISODE environment has the advantage of allowing the platform to operate over either the OSI or the Internet suite of protocols, satisfying requirement (3) of Section 3. The Lightweight Processes (LWP) library was used to provide the C-threads facilities, which were restructured as C++ classes. By doing so, the threads themselves were turned into objects, protecting their internal state from being modified inadvertently by other threads. For a better understanding of the platform, some definitions are necessary (see Figure 3):

- Process: the basic unit of resource allocation, which includes a paged address space and protected access to system resources such as file descriptors, processors and ports;

- Co-located Active Objects: objects which are in the same process, and so share the platform kernel and the protocol objects;

- Kernel: the name given to the composition of a scheduler, a lock manager and a threads manager. It is defined only to separate the part which deals with the LWP library from the one interacting with the ISODE package;

- Scheduler: the platform component responsible for the reshuffling of the queues corresponding to the several priority levels assigned to threads in activity. It runs, itself, at the highest priority level and, whenever informed by the Lock Manager, manages a priority inversion which is occurring, temporarily setting the priority of the thread holding the lock to that of the thread which is waiting;

- Lock Manager: a platform component whose function is to provide the applications implementor with a means to synchronize access to shared data within a process. Each process has its own Lock Manager. If a priority inversion is detected, the Lock Manager sends a message to the Scheduler, which takes the necessary measures to revert the situation (a minimal sketch of this interaction is given after this list). It is not necessary to have a centralized manager for locks, since the information model states that information is spread over all components: each part of the MIB is held by just one functional block, such as an OSF, WSF, NEF, MF or QAF;

Figure 4: Active object model (client and server C++ objects, each with its private state, and the state shared by the threads inside the server).

- Threads Manager: the interface between the platform and the LWP library. It works in cooperation with the scheduler to keep it up to date with respect to thread creation and destruction, and provides an interface for communication between threads inside the process;

- Protocol Objects: the platform component which controls the active associations and keeps a table relating active objects to their corresponding associations.
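The interplay between the Lock Manager and the Scheduler can be sketched as follows. This is a simplified model in standard C++, not the platform's LWP-based code: thread identities and priorities are plain integers, and blocking and waking are only indicated by comments. When a high-priority thread finds the lock held by a lower-priority thread, the holder is temporarily raised to the waiter's level and restored on release.

#include <algorithm>
#include <iostream>
#include <map>

class Scheduler {
public:
    void setPriority(int thread, int prio) { prio_[thread] = prio; }
    int  priority(int thread) const { return prio_.at(thread); }
    // Called by the Lock Manager on priority inversion: temporarily raise
    // the lock holder to the waiter's priority level.
    void boost(int holder, int waiter) {
        saved_[holder] = prio_[holder];
        prio_[holder]  = std::max(prio_[holder], prio_[waiter]);
        std::cout << "thread " << holder << " boosted to priority "
                  << prio_[holder] << "\n";
    }
    void restore(int holder) {
        auto it = saved_.find(holder);
        if (it != saved_.end()) { prio_[holder] = it->second; saved_.erase(it); }
    }
private:
    std::map<int, int> prio_, saved_;
};

class LockManager {
public:
    explicit LockManager(Scheduler& s) : sched_(s) {}
    void acquire(int thread) {
        if (held_) {
            // Priority inversion detected: inform the Scheduler.
            if (sched_.priority(thread) > sched_.priority(holder_))
                sched_.boost(holder_, thread);
            return;  // (the real platform would block the caller here)
        }
        holder_ = thread;
        held_   = true;
    }
    void release(int thread) {
        sched_.restore(thread);  // undo any temporary boost
        held_ = false;           // (the real platform would wake a waiter here)
    }
private:
    Scheduler& sched_;
    bool held_   = false;
    int  holder_ = 0;
};

int main() {
    Scheduler sched;
    LockManager locks(sched);
    sched.setPriority(1, 0);  // low-priority thread holding the lock
    sched.setPriority(2, 5);  // high-priority thread about to wait
    locks.acquire(1);
    locks.acquire(2);         // inversion: thread 1 is boosted to level 5
    locks.release(1);         // thread 1 releases; its priority is restored
}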

The platform exhibits the following features:

- Multiple threads of control can be created on request to satisfy the needs of the object, turning it into an intelligent server, because it is able to create and destroy threads depending on the number of simultaneous requests. One may notice that this is more than what is required to classify an object as active, i.e. just one independent thread per server would be enough;

- Priority levels may be assigned to threads at their creation and modified later if necessary. This assignment guarantees the satisfaction of requirement (2) of Section 3;

- All threads within the same object share its private state [20] (see Figure 4). Therefore, the applications implementor must be careful to use either monitors or the lock manager to serialize access to that state (a minimal monitor-style sketch is given after this list). The use of the lock manager is advantageous because it takes care of priority inversion;

- As the platform provides for the implementation of co-located active objects, it is possible to implement two OSFs sharing the same protocol object, avoiding the need to implement the protocol stack twice. This reduces significantly the size of the executable code of the resulting application, because normally one would have to insert protocol objects into both OSFs;

- The communication between co-located TMN functional blocks is very fast, because it amounts only to message passing between threads. This avoids the unnecessary load of the seven-layer stack (in the case of OSI communications), which would be incurred if the functional blocks were in different processes;

- Access Transparency [12, 14] is obtained through the use of a stub compiler. This compiler is a front-end to the ISODE package compilers pepy and posy, which generate C language stubs from ASN.1 [7] specifications. It generates object classes over the C data structures generated by posy, and takes advantage of the concepts of abstract classes and operator overloading existing in C++ to call automatically the encoding and decoding functions generated by pepy. The stub compiler is part of the platform environment and is also a tool implemented by the authors;

- Location Transparency of TMN building blocks is achieved by the use of the directory service quipu, which is part of the ISODE package.
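The shared-state point above can be illustrated with a minimal monitor-style sketch. The names are illustrative and a standard C++ mutex stands in for the platform's lock manager: every method enters the monitor before touching the state, so concurrent requests inside the same active object cannot corrupt it.

#include <iostream>
#include <mutex>
#include <string>
#include <thread>

// Illustrative state shared by all threads inside one active object.
class ManagedObjectState {
public:
    void set(const std::string& attribute, const std::string& value) {
        std::lock_guard<std::mutex> monitor(m_);   // enter the monitor
        attributes_ += attribute + "=" + value + "; ";
    }
    std::string dump() const {
        std::lock_guard<std::mutex> monitor(m_);
        return attributes_;
    }
private:
    mutable std::mutex m_;
    std::string attributes_;
};

int main() {
    ManagedObjectState state;
    std::thread t1([&] { state.set("operationalState", "enabled"); });
    std::thread t2([&] { state.set("administrativeState", "unlocked"); });
    t1.join();
    t2.join();
    std::cout << state.dump() << "\n";  // both updates present, in some order
}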

6 Platform Test: Concurrent MIB Server Implementation

In Section 3, two possible levels of concurrence were identified: between the functional blocks composing a building block, and inside the functional block itself.

The Concurrent MIB Server falls in the second category. It was implemented as a set of active objects whose threads have priorities distributed taking into account the MIB data classification given in that section:

- Sporadic Sensor Data: threads waiting for these data were assigned the second highest priority level. However, their activity is monitored by the scheduler to avoid monopolization of the processing time in case of a burst arrival due to a serious problem; if no action were taken by the scheduler, even the management activity could be stopped, which is unacceptable. The solution adopted was to lower the priority of such a thread to the level of normal management requests during a burst period;

- Periodic Sensor Data: the entity responsible for polling was assigned threads with the third highest level. As they consume an amount of processing bandwidth known in advance, no further action is needed to constrain their activity;

- Management Requests: these requests act either on control data or on structural data. As there is no differentiation between these data from the point of view of the OSI management primitives, i.e. an M-SET and an M-ACTION can be directed to both, the priority level of a request is raised to that of the sporadic sensor data if the primitive is M-ACTION, and left unaltered otherwise. It was done this way because an action is likely to be urgent, as in the case of shutting a link down when it is invaded by a hacker. This rise in priority lasts only for the execution of the action, however.
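The priority scheme above can be summarized in a short sketch. The numeric levels and names are illustrative, not taken from the implementation: sporadic sensor data sits just below the scheduler, periodic polling below it, and an M-ACTION is temporarily raised to the sporadic-sensor level for the duration of the action.

#include <iostream>
#include <string>

// Illustrative priority levels for the Concurrent MIB Server's threads; the
// numbers are arbitrary (the scheduler itself runs above all of them).
enum Priority {
    MANAGEMENT = 1,  // ordinary M-GET / M-SET requests
    PERIODIC   = 2,  // periodic sensor data (polling results)
    SPORADIC   = 3   // sporadic sensor data (e.g. fault notifications)
};

// A management request keeps the MANAGEMENT level, except that an M-ACTION
// is raised to the sporadic-sensor level for the duration of the action.
int priorityFor(const std::string& primitive) {
    return primitive == "M-ACTION" ? SPORADIC : MANAGEMENT;
}

int main() {
    for (const std::string primitive : {"M-GET", "M-SET", "M-ACTION"})
        std::cout << primitive << " runs at priority "
                  << priorityFor(primitive) << "\n";
    // During a burst of sporadic notifications, the scheduler lowers that
    // thread back to MANAGEMENT so ordinary requests are not starved.
}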

As the interaction with the MIB is through the Management Information Service (MIS), the MIS implementation called OSIMIS was used. That package was developed at University College London as a set of functionalities to build agents and managers. To illustrate the way the Concurrent MIB Server works, observe Figure 5. The situation displayed shows only one of the many kinds of possible behaviour: the treatment of a request which does not involve any interaction external to the MIB concerned. The Coordinator is the active object responsible for the reception of requests, the control of associations (associations are OSI connections between two entities in the application layer) and listening to sockets.

Figure 5: CMIS request with local treatment (the Coordinator forwards the request to the CMIS Agent, whose thread accesses the managed objects A and B in the MIB under serialized, lock-protected access; the numbered steps correspond to the activity sequence described below).

The CMIS Agent is the entity which performs the work in the MIB. The degree of concurrency chosen was to treat in parallel the requests proceeding from different associations, and to serialize them otherwise. The reasoning behind this is that requests proceeding from the same association are very likely intended to be treated as a set of activities to be executed sequentially. Therefore, the CMIS Agent creates a new thread for each new association, but the control of that thread remains with the Coordinator, as illustrated by dashed lines in the figure. The treatment of a CMIS request is as follows:

(1-2) The request arrives at the Coordinator, which was listening on a socket. As it realizes the message is intended for the CMIS Agent, it forwards the request by issuing a message instructing that agent to read its communication end-point;

(3-4) The agent thread responsible for that association handles the request;

(5) Let us suppose, for instance, that the operation, say a Set, implies obtaining a lock to access its target object A. The lock is then requested;

(6) The lock is obtained and the thread enters the protected region inside object A;

(7-8) During the processing inside the critical region, let us suppose that the object is composed (containment hierarchy) of an object B which also has to be modified. Then, another lock is requested and obtained; otherwise, the requesting object would remain waiting in a blocked state for that lock;

(9-10) The state of B is changed and the thread continues its execution in A;

(11-12) The thread goes back to the CMIS Agent object and continues its execution up to the sending of the response back to the requesting manager.
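The per-association concurrency policy of steps (1)-(4) can be sketched as follows. This is a simplified stand-in: the real Coordinator listens on ISODE associations and sockets, reduced here to integer association identifiers, and the request bodies stand in for CMIS primitives. Requests from different associations are handled by different threads, while requests on the same association are served in arrival order by that association's thread.

#include <condition_variable>
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

// One worker per association: requests from different associations run in
// parallel, requests from the same association are serialized in order.
class AssociationWorker {
public:
    AssociationWorker() : th_([this] { loop(); }) {}
    ~AssociationWorker() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_one();
        th_.join();
    }
    void post(std::function<void()> request) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(request)); }
        cv_.notify_one();
    }
private:
    void loop() {
        for (;;) {
            std::function<void()> req;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return stop_ || !q_.empty(); });
                if (stop_ && q_.empty()) return;
                req = std::move(q_.front());
                q_.pop();
            }
            req();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> q_;
    bool stop_ = false;
    std::thread th_;  // declared last so it starts after the other members
};

class Coordinator {
public:
    // Steps (1-2): forward the request to the agent thread that owns the
    // association, creating that thread on the first request it carries.
    void dispatch(int association, std::function<void()> request) {
        auto& worker = workers_[association];
        if (!worker) worker = std::make_unique<AssociationWorker>();
        worker->post(std::move(request));
    }
private:
    std::map<int, std::unique_ptr<AssociationWorker>> workers_;
};

int main() {
    Coordinator coordinator;
    coordinator.dispatch(1, [] { std::cout << "association 1: M-SET on object A\n"; });
    coordinator.dispatch(2, [] { std::cout << "association 2: M-GET\n"; });
    coordinator.dispatch(1, [] { std::cout << "association 1: M-ACTION (after the M-SET)\n"; });
}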

6.1 Performance Tests

Worst-case tests were performed in order to evaluate the overhead introduced by the platform. This worst case is represented by such a low event rate that the existence of concurrence and several priority levels is totally irrelevant, because an event is treated before a new one arrives. For comparison, a monothreaded version of the same NE without priorities, but also using C++ and ISODE, was used. Those tests showed that the mean response time is almost the same as that displayed by the monothreaded version. Consequently, the overhead introduced by the control structure inside the platform is minimal. The results are shown in Table 1. The quantities displayed are the total elapsed times from the moment the request was made until the reception and display of the response, measured with the SunOS time command. They depend, of course, on the local network load, but the tests were performed late at night, when network traffic is lower, and both agents were tested under the same conditions, at almost the same time. That means the figures are valuable as a means of comparison, not as absolute values. Typical commands directed to the platform were:

  time maction SMA treville -c uxObj1 -i uxObj1=test -a getUserNames

  time mibdump -c transportEntity -i subSystemId=transport@entityId=isode

  time mset SMA treville -c uxObj1 -i uxObjId=test -w wiseSaying="..."

Also, as one wanted to tune the scheduler, tests were run for several values of the time between rescheduling (TBR), of which two values are displayed: 50 ms and 10 ms.

Table 1: Comparison between the classical and the multithreaded implementations (measured in seconds).

             Get                                  Set
             Monothreaded  Multithreaded          Monothreaded  Multithreaded
                           10 ms     50 ms                      10 ms     50 ms
  Get        2.085         1.948     2.043        1.780         1.951     1.759
  Action     1.950         1.912     2.050        1.737         1.810     1.870

7 Concluding Remarks

Multithreaded active objects, with threads distributed over several priority levels and a lock control service, are useful when implementing TMN building blocks, because each functional block included in a building block can be implemented as an active object. The Distributed Active Objects Platform was implemented in C++, under SunOS 4.1, with the ISODE package as its communications infrastructure, making it suitable for the implementation of management applications in both the OSI and Internet frameworks. The Lightweight Processes library was used to provide C-threads facilities, which were restructured as C++ classes. Computational tests showed that the overhead introduced by the platform is minimal, so it can be used even in an unfavourable situation such as a low event arrival rate. As far as we know, this is the only platform in which concurrent multiprocessing with several priority levels is introduced in a procedural language such as C++, and whose communication is realized by the ISODE package.

References

[1] M. J. Accetta, W. Baron, R. V. Bolosky, D. B. Golub, R. F. Rashid, A. Tevanian, and M. W. Young. Mach: A new kernel foundation for UNIX development. In Proceedings of the Summer USENIX Conference, 1986.

[2] H. E. Bal, J. G. Steiner, and A. S. Tanenbaum. Programming languages for distributed computing systems. ACM Computing Surveys, 21(3):261-322, 1989.

[3] E. Cardozo, J. S. Sichman, and Y. Demazeau. Using the active object model to implement multi-agent systems. In Proceedings of the 5th International Conference on Tools with Artificial Intelligence, pages 70-77, 1993.

[4] J. R. Eykholt, S. R. Kleiman, S. Barton, R. Faulkner, A. Shivalingiah, M. Smith, D. Stein, J. Voll, M. Weeks, and D. Williams. Beyond multiprocessing... multithreading the SunOS kernel. In Proceedings of the Summer USENIX Conference, 1992.

[5] B. O. Gallmeister and C. Lanier. Early experience with POSIX 1003.4 and POSIX 1003.4a. In Proceedings of the IEEE Symposium on Real-Time Systems, pages 190-198, 1991.

[6] F. Herrmann, F. Armand, M. Rozier, M. Gien, V. Abrossimov, I. Boule, M. Guillemont, P. Leonard, S. Langlois, and W. Neuhauser. CHORUS: A new technology for building UNIX systems. In Proceedings of the EUUG Autumn Conference, 1988.

[7] ISO. ISO/IS 8824 - Specification of Abstract Syntax Notation One (ASN.1), 1987.

[8] ITU. ITU-T Recommendation I.311 - B-ISDN General Network Aspects, 1992.

[9] ITU. ITU-T Recommendation X.700 - Management Framework Definition for Open Systems Interconnection, 1992.

[10] ITU. ITU-T Recommendation I.371 - Traffic Control and Congestion Control in B-ISDN, 1993.

[11] ITU and ISO/IEC. ITU-T Recommendation M.3010 - Principles for a Telecommunications Management Network, 1992.

[12] ITU, ISO/IEC. ITU-T Draft Recommendation X.901 - ODP-RM Overview, 1994.

[13] ITU, ISO/IEC. ITU-T Draft Recommendation X.902 - ODP-RM Descriptive Model, 1994.

[14] ITU, ISO/IEC. ITU-T Draft Recommendation X.904 - ODP-RM Architectural Semantics, 1994.

[15] S. Khanna, M. Sebree, and J. Zolnowsky. Realtime scheduling in SunOS 5.0. In Proceedings of the Winter USENIX Conference, 1992.

[16] F. Mueller. A library implementation of POSIX threads under UNIX. In Proceedings of the USENIX Conference, pages 29-41, 1993.

[17] S. Rampal and D. S. Reeves. An evaluation of routing and control algorithms for real-time traffic in packet-switched networks. In Proceedings of the IFIP Conference on High Performance Networking (HPN'94), pages 77-92, 1994.

[18] S. Rampal, D. S. Reeves, I. Viniotis, and D. P. Agrawal. An approach towards end-to-end QoS with statistical multiplexing in ATM networks. Technical Report TR 95/2, Center for Communications and Signal Processing, North Carolina State University, 1995.

[19] D. Stein and D. Shah. Implementing lightweight threads. In Proceedings of the Summer USENIX Conference, 1992.

[20] B. Stroustrup. The C++ Programming Language. Addison-Wesley, 1991.
