1. Object-Oriented Modeling of the Architecture of Integrated Networks


Aurel A. Lazar
Department of Electrical Engineering and Center for Telecommunications Research
Columbia University, New York, NY 10027-6699
e-mail: [email protected]

Abstract

A unified framework of an architecture for integrated networks is outlined. The fundamental objects of this architecture are modeled as information transport entities, network entities and operators. The network architecture is based on three fundamental principles: the Separation Principle, the Principle of Communication Transparency and the Principle of Asynchronous Resource Management. Based on these three principles, the network objects are organized into an Integrated Reference Model. Augmented by performance modeling of four traffic classes and the corresponding performance measures, these principles define the structure of the network control and management architecture.

Key words and phrases: asynchronous algorithms, entities, network architecture, traffic control architecture, objects, quality of service, reference model, performance modeling.

CTR Technical Report # 167-90-04, Center for Telecommunications Research, Columbia University, New York, NY 10027-6699, January 1990.

July 1, 1992

Contents

Introduction
1.1 Network Objects and Operators
1.1.1 Information Transport and Network Entities
1.1.2 Operators
1.2 Three Fundamental Principles
1.2.1 The Separation Principle
1.2.2 The Principle of Communication Transparency
1.2.3 The Principle of Asynchronous Resource Management
1.3 The Integrated Reference Model
1.3.1 User Information Transport - The U-Plane
1.3.2 Connection Management and Control - The C-Plane
1.3.3 Resource Monitoring and Management - The D-Plane
1.3.4 Resource Management and Control - The M-Plane
1.4 Performance Modeling
1.4.1 Queueing Model
1.4.2 Traffic Classes, Traffic Types and Priority Classes
1.4.3 Network and User Performance
1.4.4 Information Structures
1.4.5 Network Control Parameters
Conclusions
References

Introduction

The goal of this paper is to present a unified framework for, and to further our understanding of, the behavior, design and implementation issues of integrated networks, both from a logical and from a performance point of view. Three fundamental principles are identified: the Separation Principle, the Principle of Communication Transparency and the Principle of Asynchronous Resource Management. Augmented by performance modeling that includes four traffic classes and the corresponding performance measures, these principles define an Integrated Reference Model for network architectures.

The objective of the Integrated Reference Model is to provide a model for the organization of:

• information transport entities,
• network entities, and
• operators on such entities.

The organization requires the execution of the following tasks:

• resource management and control,
• connection management and control,
• resource monitoring and management, and
• user information transport.

Resource management and control tasks support the resource sharing mechanisms of the network as well as the user resources. Resource monitoring and management is the task of providing information about the real-time status of the network. Connection management and control operators transport signaling information to the users (end-to-end) as well as control information into and within the network. User information flows and network entities support the actual information transfer.

The resource management and control tasks are embedded into the Traffic Control Architecture. They are performance driven. Given the configuration of a network, this architecture is based on the interplay between four classes of algorithms: scheduling and buffer management, routing, flow control and admission control. Observations and their abstractions are obtained through monitoring and evaluation of network behavior as a function of the traffic load profile and quality of service requirements. The knowledge about the network and its status is stored in a Knowledge Database that is accessible by both the resource management and control and the connection management and control tasks. It supports the Traffic Control Architecture as well as the Fault Management Architecture.

From a performance standpoint, our fundamental assumption is that the main network resources, switching and communication bandwidth, buffer space and processing, are both observable and controllable. The model of queueing behavior proposed in this paper explicitly takes into account the following four traffic classes: Class C (control and management), Class I (zero percent packet loss; no retransmission), Class II (ε percent packet loss with an average of η consecutively lost packets; no retransmission), and Class III (zero percent packet loss; retransmissions). Therefore, objectively quantifiable measures are associated with the quality of service requirements of the end user traffic. The user can map his/her application onto one of the latter three traffic classes.

This paper is organized as follows. In section 1.1 network objects and operators in telecommunication networks are introduced. Three fundamental principles for defining a network architecture are presented in section 1.2. Section 1.3 presents the Integrated Reference Model. Performance modeling is discussed in section 1.4.

1.1 Network Objects and Operators

The most fundamental task in communications is information transport. It consists of transferring a "bag of information" between two arbitrary points, say A and B. The next level of communication complexity is achieved with the model of a duplex connection that exhibits feedback between the transmission path from point A to B and the transmission path from point B to A. How is this information represented? What are the basic entities that support the information transport? What are the basic operations to be performed? These are the issues briefly discussed in the sections below.

1.1.1 Information Transport and Network Entities

In what follows we define a set of entities (or objects [ELM89]) that represent information that needs to be transported and a set of entities that take part in the task of information transfer. Each basic information entity is characterized by a set of attributes called a mark and a generation time attribute.

The fundamental information transport entity (or atom) is called a packet. Packets have a length that is measured in bits. The length might be fixed or variable. Packets are also characterized by: user tag, traffic class, traffic type and priority class. These attributes are called the mark of the packet. The user tag specifies the source-destination pair. The traffic class, type and priority class will be discussed in section 1.4.

Packets ordered into a stream according to their generation time form an information flow. Information flows are characterized by the packet generation times, $(\tau_n)_{n \in N}$, as well as by the marks of the packets, $(k_n)_{n \in N}$ (see Figure 1). The fundamental level of abstraction is, therefore, called a "marked point process" [BRE81], [BAC87]. Information flows are generated by traffic sources. They are consumed by traffic sinks. Sources and sinks are also called users.


Figure 1. The Marked Point Process Model for Information Flows
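To make the abstraction concrete, the sketch below models a packet and its mark as plain data objects and an information flow as a time-ordered stream. It is a minimal illustration in Python; the class and field names are ours, not the paper's, and the mark structure simply mirrors the attributes defined above.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mark:
    user_tag: tuple       # source-destination pair
    traffic_class: str    # 'C', 'I', 'II' or 'III' (see section 1.4)
    traffic_type: str     # 'point-to-point', 'multicast' or 'broadcast'
    priority_class: int

@dataclass(frozen=True)
class Packet:
    generation_time: float   # tau_n
    mark: Mark               # k_n
    length_bits: int         # fixed or variable

def information_flow(packets):
    """Packets ordered by generation time: a realization of the
    marked point process (tau_n, k_n)."""
    return sorted(packets, key=lambda p: p.generation_time)
```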

The class of network entities that support the fundamental telecommunications task are switches, channels, buffers and processors. Switching bandwidth, communication bandwidth, buffer space and processing are their respective attributes. They are also known as resources. A set of relationships (or associations) is defined among the network entities.

In a multiple user environment, the information transport task can be achieved by employing a range of architectures. A system architecture is a structured choice of network entities and their relationships that supports the successful execution of the information transfer task. Configuration is one of its attributes. The most obvious choice of a system architecture is to have a broadcast interconnection between a user and every other user. Here, broadcast interconnection is a relationship between the user (an entity) and all other users (entities). Such an architecture would exhibit tremendous complexity and would, in the general distributed setting, be practically infeasible. A feasible solution is to build a network, consisting of switching nodes and communication links, that routes the information to the end-users.


1.1.2 Operators

Operators act on the system architecture or on information flows. The former are used to define the configuration of the network. The latter are employed to control congestion in the network. A set of operators supporting a specific information transmission task is called an algorithm.

The configuration control operator defines the configuration of the system architecture. Through a set of entities and relationships it prescribes the system architecture of the network. It is expected that the emerging lightwave networks will have the capability of real-time reconfiguration [LAB90].

In packet switched networks the address recognition operator plays a fundamental role. Address recognition is used to identify the mark of a packet. It is applied to a packet arriving at a switching node, channel or sink.

Information flows typically contain packets that are characterized by the same attributes (marks). There are several operators that act on information flows. These are: the multiplexing operator, the demultiplexing operator and the delay operator. The latter can introduce a delay between different traffic streams.

The existence of an information flow can be negotiated in networks. The negotiation operator is called connection setup. Connection setup is the operator used to establish a connection between two or multiple users. A connection between a user and all other users is called a broadcast. When only a subset of all the users is considered, the connection is called a multicast.

The following operators are also defined: scheduling and buffer management, routing, flow control and admission control. These operators allow various manipulations of the information streams. The scheduling operator reorders packets belonging to different information flows at a switching point in the system. This typically involves a destination address, traffic class, or priority class recognition for serving packets. The buffer management operator accepts or rejects packets into the system. Packets that are not permitted to enter a buffer management system, for example, are said to be blocked. The associated operator is called the blocking operator. Note that scheduling and buffer management are operators that act on individual packets.

Routing operations in the network are designed based upon two of the fundamental operators mentioned before: the multiplexing and the demultiplexing operators. While the routing operator might act at a switching node on each individual packet separately, its distributed network version acts on information flows networkwide. The latter consists of a routing operator composed of the set of individual routing operators in the network. Packets that belong to the same information flow might be routed differently. In this case a reordering of packets at the sink is necessary to recover the order in which the packets were originally generated at the source.

Flow control is the operator that regulates the traffic flow of any of the traffic sources. The flow control operator acts on a single information flow and regulates the amount of traffic a source might send into the network. Admission control is the operator that allows or blocks a new information flow from joining the network. This operator acts upon traffic flows. It is different from the blocking operator in buffer management, since it is defined to operate on information flows. Recall that the blocking operator defined above acts on individual packets.
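The packet-level versus flow-level distinction drawn above can be summarized in two small functions. This is a hedged sketch with invented names: blocking decides the fate of one packet against a buffer, while admission control decides the fate of an entire information flow against a caller-supplied quality of service test.

```python
def blocking_operator(buffer, capacity, packet):
    """Buffer management: accept or reject an individual packet."""
    if len(buffer) >= capacity:
        return False              # the packet is blocked
    buffer.append(packet)
    return True

def admission_control(active_flows, new_flow, can_meet_qos):
    """Admission control: accept or reject a whole information flow.
    `can_meet_qos` stands in for the resource manager's decision."""
    if can_meet_qos(active_flows + [new_flow]):
        active_flows.append(new_flow)
        return True
    return False                  # the flow is not admitted
```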

1.2 Three Fundamental Principles

In what follows, we introduce a set of principles that guide the organization of the information transport and network entities, and their relationships, into a network architecture. These principles are: the Separation Principle, the Principle of Communication Transparency and the Principle of Asynchronous Resource Management.

1.2.1 The Separation Principle

The Separation Principle consists of two parts. The first one states that the tasks of Communications and Controls are logically separated into two planes. The Control plane controls the activities of the Communication plane by a set of control strategies that are based on a set of measurements (see Figures 2 and 3). The second part of the Separation Principle states that the Control plane is logically separated into three planes, each modeling a different task. These are: resource management and control, resource monitoring and management, and connection management and control.

Figure 2. The Logical Separation between Communications and Controls (Measurement Function)

The major requirement on the resource management and control tasks is to guarantee quality of service [LAZ90a]. This requirement is achieved through a set of algorithms that allocate the resources of the network. There are five classes of algorithms. These are characterized by: configuration control, scheduling and buffer management, routing, flow control and admission control.

The resource monitoring and management task consists of representing the knowledge about the network objects, setting up sensors for monitoring, and abstracting the obtained data. The abstracted data about the information transport and network entities, and their relationships, is presented for network control as well as in response to queries.

The connection management and control task consists of setting up new connections, renegotiating existing ones and finally disconnecting them. This task is closely related to resource management and control. Since the network is a distributed entity, the detection of lightly used network resources (such as switching bandwidth or buffer space) would typically require a distributed search.

Our definition of the Separation Principle is based on an abstraction of the network architecture of MAGNET I [LAZ85] and the structure of the IRM proposed in [LAZ86], as well as the requirements on the architecture of the ISDN and CCSS #7 [CCI85a-d]. See also [FES89].

1.2.2 The Principle of Communication Transparency

The usual description of the transfer of information from a source to a sink is given in terms of entities such as links (channels), routes, connections and sessions. A session consists of a set of connections, which in turn consist of a set of routes made up of point-to-point links. The task of information transfer requires error free recovery of information on all these levels of information transmission. Hence, the task of information transfer between peer entities (objects) is a process that can be layered, i.e., broken down into subprocesses [HAL88]. Each layer performs a well-defined function in the context of the overall communication strategy. The function of each layer is defined by the services it provides to the layer above, its own operation (or protocol) and the services it requires from the layer below (see Figure 4).

Figure 3. The Logical Separation between Communications and Controls (Control Function)

For convenience, we adopt the following notation. An "(N)-entity" refers to an entity existing in layer N. An (N)-entity may provide (N)-services through several (N)-SAPs, and utilize the (N-1)-services provided by one or more (N-1)-SAPs. The set of rules and formats which govern the information flows among peer entities is specified as a protocol (see Figure 4).

The Principle of Communication Transparency specifies that the knowledge of the services the layer is to provide to the layer above, the internal protocol of the layer, and the services that are provided by the layer below is all that is required to communicate, through units of information, with the corresponding protocol entity of the remote system. This principle was in essence first defined by the ISO in the OSI standard for open systems interconnection [HAL88].


Figure 4. Protocol Layer Model
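A minimal rendering of the layer model of Figure 4 follows. The class below is an assumption-laden sketch, not an OSI implementation: an (N)-entity offers services upward, applies its own protocol to form PDUs, and relies on the (N-1)-services below, without either neighbour seeing the peer protocol.

```python
class ProtocolEntity:
    """An (N)-entity: provides (N)-services to layer N+1 and uses the
    (N-1)-services of the entity below. Peer communication happens
    only through this layer's own protocol (PDUs), which is exactly
    what the Principle of Communication Transparency requires."""

    def __init__(self, layer, lower=None):
        self.layer = layer
        self.lower = lower       # the (N-1)-entity, if any

    def service_request(self, sdu):
        """(N)-service offered upward: wrap the SDU into a PDU
        according to this layer's protocol, then use the layer below."""
        pdu = {'layer': self.layer, 'header': f'H{self.layer}', 'body': sdu}
        if self.lower is not None:
            return self.lower.service_request(pdu)
        return pdu               # bottom layer: 'transmit' the PDU

# stack = ProtocolEntity(3, ProtocolEntity(2, ProtocolEntity(1)))
# stack.service_request('user data')  -> PDUs nested H3 / H2 / H1
```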

1.2.3 The Principle of Asynchronous Resource Management

The Principle of Asynchronous Resource Management pertains to the modeling of the modus operandi of the resource management and control tasks that support the process of information transfer. It is necessitated by, and is a direct reflection of, the fundamental time delays (constraints) that exist between objects in distributed systems.

There are five classes of algorithms whose performance affects the efficiency of the information transfer task. These are configuration control, scheduling and buffer management, routing, flow control and admission control. The Principle of Asynchronous Resource Management applies to the latter four. Thus, it is assumed here that the configuration of the network is given. The scheduling and buffer management, routing, flow control and admission control parameters are vectors that will be denoted by $\mu$, $r$, $\lambda$ and $K$, respectively. Their dimensionality will be defined in section 1.4.

The basic control problem resolved in the Resource Management and Control Plane is the efficient allocation of network resources. This can be stated in the general form as

$$\max_{K, \lambda, r, \mu} \{ U, C \} = \max_{K} \, \max_{\lambda} \, \max_{r} \, \max_{\mu} \, ( U, C ), \qquad \text{(EQ 1.1)}$$

where U and C are sets of utility functions and (negative) costs, respectively. The average throughput and the percentage of lost cells are examples of utility functions and costs, respectively. See section 1.4 for more details.

Equation (1.1) is purely formal, because the sense in which the maximum on its left hand side is defined has not been explicitly specified. (In practice this is left to the network designer.) As an example, the maximum on the left hand side of equation (1.1) can be defined in the sense of the Nash equilibrium. In [HSI87a], [HSI87b] this approach was limited to finding flow control policies. This problem setting can also be extended to all the control strategies residing in the Resource Management and Control Plane [BOV88]. It typically results in finding a fixed point.

The right hand side of equation (1.1) suggests that the maximum on the left hand side (if it is well defined and exists) will be found by a stepwise optimization procedure along four classes of parameters. This iterative procedure, however, is not the only possible solution for finding the optimal operating point. Other solutions are based on finding the control parameters of the different classes of control algorithms asynchronously. They require an asynchronous exchange of information between these classes. Gauss-Seidel, Jacobi and other iterative methods [BER89] have been considered in the literature of distributed and parallel computation.

The following example should further clarify these ideas (see Figure 5). On the vertical axis of Figure 5 the parameters of the four classes of algorithms are depicted. The four algorithms are repeatedly run during the lifetime of the network. The arrows show the time of completion of each of the algorithms. This is not, however, the time when the information about the corresponding control parameters is available throughout the network. As an example, at time $t_1$ the most recent information about the routing parameters is available throughout the network. Any of the four distributed algorithms that start after this time will use this information. The routing algorithm that starts at time $t_2$ will use current information about the scheduling and buffer management, routing and admission control parameters. It will use, however, outdated information about the flow control parameters.

Figure 5. Asynchronous Algorithms

The Principle of Asynchronous Resource Management states that each of the network control parameter classes (scheduling and buffer management, routing, flow control and admission control) participates in a distributed control task. The network operating point is achieved via an asynchronous algorithm among these four classes of algorithms.


The Principle of Asynchronous Resource Management does not require the implementation of a specific class of asynchronous algorithms. This is left to the network designer. It requires, however, that information about control parameters is exchanged. Based on this exchange, the four sets of control parameters might converge to a set of fixed points. No guarantee of uniqueness of the operating point is given (or required).
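A toy numerical illustration of such an asynchronous operation is sketched below, under strong simplifying assumptions: the objective is separable, each of the four controllers improves only its own parameter, and completion times are interleaved at random as in Figure 5. The function names and the quadratic objective are ours, chosen only to exhibit the fixed-point behavior.

```python
import random

# Toy separable stand-in for the objective of equation (1.1); the real
# utility and costs would come from the performance model of section 1.4.
TARGETS = {'mu': 3.0, 'r': 1.0, 'lam': 2.0, 'K': 5.0}

def utility(params):
    return -sum((params[n] - t) ** 2 for n, t in TARGETS.items())

def controller_step(params, name, rate=0.5):
    """One controller improves its own parameter by a gradient step,
    taking the (possibly outdated) values of the others as given."""
    params[name] -= rate * 2.0 * (params[name] - TARGETS[name])

def asynchronous_operation(steps=200, seed=0):
    rng = random.Random(seed)
    params = {'mu': 0.0, 'r': 0.0, 'lam': 0.0, 'K': 0.0}
    for _ in range(steps):
        # Completion times are interleaved at random, as in Figure 5.
        controller_step(params, rng.choice(sorted(params)))
    return params, utility(params)   # the iterates approach a fixed point
```

For asynchronous schedules with stale information, convergence generally requires contraction-type conditions of the kind studied in [BER89]; the toy objective above converges regardless, which a real network utility need not do.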

1.3 The Integrated Reference Model

As already mentioned, the information transport and network entities are organized into a network architecture. The Integrated Reference Model (IRM) presented below gives an outline for this organization.

The IRM is logically divided into four vertical planes and a set of horizontal layers and modules (see Figure 6). The purpose of this division is to facilitate the identification of the main issues when implementing the integrated network architecture. The vertical subdivision corresponds to the main control and communications tasks mentioned in section 1.2.1. These tasks are originated, respectively, in the user transport (U)-plane, and the connection management and control (C)-, resource monitoring and management (D)- and resource management and control (M)-planes.

A plane is characterized by a set of entities and their relationships. The (U)-plane models the user transport of information. All entities and algorithms that support or are part of information transport are organized in this plane. The (C)-plane contains the entities and algorithms responsible for connection management and control. The (D)-plane contains the entities and algorithms for monitoring and management. (The data about the network is stored in a Knowledge Database.) The (M)-plane has the entities and algorithms responsible for resource management and control.

The (U)- and (C)-planes are horizontally layered. The horizontal subdivision corresponds to the layering concept described in section 1.2.2 and introduced by the OSI RM [HAL88]. Recursive application of the OSI Service Model, consisting of a service provider and multiple service users, is the basis for layering the IRM. The (D)- and (M)-planes consist of a number of objects or modules.

Figure 6. The Integrated Reference Model

For convenience, we adopt the notation of section 1.2.2. As before, an "(N)-entity" refers to an entity existing in layer N. Further, if the entity resides within a particular plane, say the (C)-plane, the notation "(N, C)-entity" is used. Whenever defined, similar representations apply for services and SAPs.

The ISDN Reference Model [CCI85d] defines two distinct protocol planes, the user plane (U) and the control plane (C). There exists a functional similarity between these planes and the IRM (C)- and (U)-planes. Note that the functions of the (M)- and (D)-planes in the IRM are incorporated in the ISDN (C)-plane. A direct correspondence, however, does not exist. The interconnection of the architecture of an integrated network (IN) to an ISDN, however, is facilitated by the concept of planes.

In the case of interconnecting an IN to an OSI network, the IRM (M)- and (D)-planes correspond to the OSI layer management. On the other hand, the collection of OSI layer entities can be regarded as a combination of the (C)- and (U)-planes.

1.3.1 User Information Transport - The U-Plane

The (U)-plane of the IRM allows integrated protocols to have the following capabilities:

• Connection oriented services as well as connectionless (datagram) services;
• Symmetric as well as asymmetric traffic flows (one important case is the simplex, or unidirectional, connection);
• Multipoint as well as point-to-point communications (one example of multipoint communications is a broadcast datagram service for administrative purposes; another is a multicasting virtual circuit service for conference calling).

(N, U)-services are characterized as services which are concerned with the actual transfer of user information. The (N, U)-entities are, therefore, the abstract functional groups used to transfer user information. In this context, the user refers not only to the ultimate application process, but also to the higher layers in general.

The concept of quality of service is of significant importance in integrated protocols. Isochronous and non-isochronous traffic flows are handled according to the desired performance characteristics. Thus, the protocol implements a service according to the guaranteed quality of service offered to Class I, II or III traffic (see section 1.4).

In the OSI RM, entities and SAPs are not differentiated. Therefore, control PDUs and information PDUs regarding the same connection may be transmitted through the same SAP even if the required quality of service for each type of transmission is different. In the case of the IRM, however, they must use different SAPs, and can thus utilize the network resources more precisely.


1.3.2 Connection Management and Control - The C-Plane

Connection management and control tasks establish, renegotiate and disconnect logical connections between two users located in remote systems. (N)-connections, defined as temporary relationships among peer (N, U)-SAPs, are established for the transfer of connection oriented user information flows using (N, C)-services. Services are provided for the establishment, renegotiation and disconnection of (N)-connections.

Performance criteria as defined in section 1.4 can be established on a per connection basis to achieve a desired quality of service. Renegotiation provides for dynamic adjustment of connection attributes. The (N, U)- and (N, C)-SAPs provide a means for transferring signaling information flows separately from user information flows. The (C)-plane utilizes information obtained from the (D)-plane in maintaining (N)-connections.

1.3.3 Resource Monitoring and Management - The D-Plane

The resource monitoring and management activities of the network architecture are modeled as a separate (D)-plane. (D)-plane entities provide data models and information about the history of the behavior of the network.

The (D)-plane is organized as an object-oriented distributed database [BAN88]. A view of this database is shown in Figure 7 [MAZ89], [MAZ90a], [MAZ90b]. The system level knowledge about the network is represented in a Configuration Database. This database contains knowledge about the network entities and their relationships. The knowledge about the sensors and their relationships with the network objects is stored in the Sensor Database. In the Dynamic Database, state and event variables are represented. Finally, the Statistical Database represents the statistical information about the network.

The Statistical Database is a high level abstraction of the Knowledge Database. It is updated based on the information collected by the sensors and processed for data abstraction. Thus, the Statistical Database (shown in Figures 6 and 7) is the dataspace containing the performance parameters of the objects of interest. These performance parameters are associated with the corresponding state and event variables that describe the behavior of the network.

Figure 7. The Organization of the Knowledge Database

(D)-entities provide information for connection establishment, resource allocation/deallocation, reconfiguration, fault detection/recovery, statistics gathering, etc. There are five objects in the (D)-plane that play an important role in real-time network control. These are the resource control databases associated with configuration control, scheduling and buffer management, routing, flow control and admission control. As part of the Statistical Database, the resource control databases derive their information from the Dynamic Database as well as from the control parameters of the five resource control and management algorithms.

Knowledge representation about the information transport and network entities is not explicitly given in the ISDN and OSI models. It is fundamental, however, for providing a unified framework for the plethora of network management tasks of the future giant integrated networks. We have stressed and exemplified the organization of the (D)-plane that supports real-time control. It is easy to see, however, how fault management is supported by the organization of the Knowledge Database.
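The organization of Figure 7 can be paraphrased in code. The sketch below assumes simple dictionary-backed stores and an averaging abstraction step; the sub-database names follow the figure, everything else is illustrative.

```python
class KnowledgeDatabase:
    """Dictionary-backed sketch of the (D)-plane organization of
    Figure 7; the four sub-database names follow the figure."""

    def __init__(self):
        self.configuration = {}   # network entities and relationships
        self.sensors = {}         # sensors attached to network objects
        self.dynamic = {}         # state and event variables
        self.statistical = {}     # abstractions: throughput, loss, delay

    def record(self, variable, value):
        """Sensor data collection feeds the Dynamic Database."""
        self.dynamic.setdefault(variable, []).append(value)

    def abstract(self, variable, reducer=None):
        """Data abstraction: summarize a state/event variable into the
        Statistical Database (by default, its running average)."""
        values = self.dynamic.get(variable, [])
        if values:
            reduce_fn = reducer or (lambda xs: sum(xs) / len(xs))
            self.statistical[variable] = reduce_fn(values)
        return self.statistical.get(variable)
```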

1.3.4 Resource Management and Control - The M-Plane

The Integrated Reference Model accommodates an abstraction of the management activities of INs as a separate (M)-plane. The resource management entities have a direct impact on the quality of service that is provided by the network.

There are five classes of entities (modules) in the (M)-plane. They are characterized by the class of algorithms they support. These are configuration control, scheduling and buffer management, routing, flow control and admission control. Each of these algorithms has local access to the complete state and event variables. Statistical information about the values of the parameters of other distributed controllers is available from the (D)-plane.

Assuming that the configuration of the network is available, the modus operandi of the four resource sharing modules is postulated by the Principle of Asynchronous Resource Management. The get (read) and set (write) operations on the Statistical Database are asynchronous. Each of the four modules sets the latest values of its control parameters and gets the latest values of the other control parameters. For example, once the routing parameters are obtained, a new set of routing parameters is derived based on the current loading of the network. This can be obtained from the data stored in the Statistical Database. Therefore, the Statistical Database is logically operated as a blackboard [LES80]. The four modules just described appear as competing agents who take actions on this database independently of the other agents.

(M)-entities may vary from node to node. There may be classes of nodes in the network that are specialized to provide different types of functionality. The functional elements provided by any class of node may exist independently of any other class. Individual nodes may be part of the network architecture itself, and individual protocols may require particular classes of nodes to operate.

The ISDN Reference Model also has management functions in each node, in addition to those provided in the (C)- and (U)-planes. These functions are local to each node and are not organized into a layered (or modularized) structure. The (M)-plane of the IRM is not layered. This contrasts with the layer management of the OSI Management Model [ISO89], [LIT89], where no explicit representation of the monitored information and its abstractions is available.
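The blackboard operation of the Statistical Database described above might be sketched as follows; the get/set method names echo the text, while the locking discipline is our assumption for making independent agents safe.

```python
import threading

class Blackboard:
    """The Statistical Database operated as a blackboard [LES80]: each
    of the four (M)-plane modules sets its own control parameters and
    gets the latest values posted by the others. A sketch; the method
    names and locking discipline are assumptions, not from the paper."""

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def set(self, module, parameters):
        with self._lock:                      # atomic write of the
            self._data[module] = parameters   # module's latest values

    def get(self, module):
        with self._lock:
            return self._data.get(module)     # possibly outdated by the
                                              # time the caller uses it

# e.g., the routing module posts parameters derived from current load:
# board.set('routing', r_new); load = board.get('flow_control')
```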

1.4 Performance Modeling

One of the most important factors in the design of integrated networks is the concept of quality of service, i.e., the performance-oriented characterization of services. Performance concepts form one of the tenets of any design. In what follows, the main performance concepts will be introduced and their impact on network design highlighted.

1.4.1 Queueing Model

The U-Plane functionality is modeled using a queueing network model with multiple user classes. The general queueing network model is shown in Figure 8. Its configuration is assumed to be given. The network is assumed to consist of I nodes. Packets are routed from node i to node j if node i is connected to node j, i, j = 1, 2, ..., I.

The queueing network model described here can be specified by a set of four parameters. These are: $\mu$ for servers, $r$ for routing, $\lambda$ for arrival rates and $K$ for the number of users.

The schedulers are distributed throughout the network. At node i, the scheduling is packet class (mark) dependent. Thus, if $\mu_i$ represents the scheduler at node i, the amount of resources allocated to class k packets is $\mu_i^k$. Here $k = (k_1, k_2, k_3, k_4)$, where $k_1$, $k_2$, $k_3$ and $k_4$ represent the user tag, the traffic class, the traffic type and the priority class, respectively. The mark k will be further elaborated upon in the next section.

The routing parameters are also packet class dependent. Therefore, for source-destination pair $k_1$, transmitting class $k_2$ packets of type $k_3$ with priority $k_4$, the routing from node i to node j will be denoted by $r_{ij}^k$.


Figure 8. Queueing Model

The flow control parameters are also packet class dependent. Thus, $\lambda_i^k$ will denote the rate of sending class k packets into the network.

Finally, $K = (K^1, K^2, K^3, K^4)$ denotes the total number of source-destination pairs (including virtual circuits), the total number of traffic classes, the total number of traffic types and the total number of priority classes in the network.
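As an illustration of these index conventions, the parameter vectors can be held as mappings keyed by node and mark. The numbers below are made up; only the indexing scheme reflects the text.

```python
# Mark k = (k1, k2, k3, k4): user tag, traffic class, traffic type, priority.
k = ((0, 3), 'II', 'point-to-point', 1)

mu  = {0: {k: 155.0}}          # mu[i][k]: capacity for class k at node i
lam = {0: {k: 40.0}}           # lam[i][k]: class-k arrival rate at node i
r   = {(0, 1): {k: 0.7},       # r[(i, j)][k]: fraction of class-k traffic
       (0, 2): {k: 0.3}}       # routed from node i to node j

# K = (K1, K2, K3, K4): the numbers of source-destination pairs, traffic
# classes, traffic types and priority classes, bounding the mark ranges.
K = (16, 4, 3, 4)

assert abs(sum(rt[k] for rt in r.values()) - 1.0) < 1e-9  # node 0 splits all
```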


1.4.2 Traffic Classes, Traffic Types and Priority Classes

User information flows in the context of INs are isochronous and non-isochronous (for example video and voice, versus graphics, data and facsimile). As already mentioned, the networks considered here transport packets that are characterized by an attribute called a mark. The mark of a packet consists of user tag, traffic class, traffic type and priority class attributes. The user tag specifies the source-destination pair or the virtual circuit number.

Four traffic classes are defined. The traffic class is an attribute that reflects the quality of service parameter. Three of the traffic classes, Class I, II and III, transport user traffic and are defined by a set of performance constraints. The fourth class, Class C, transports the traffic of the network management system (C stands for Control). While no specific performance criteria are associated with Class C traffic, it is assumed here that packets belonging to this class will not encounter congestion in the network. This can be achieved by proper network dimensioning.

Class I traffic is characterized by 0% contention packet loss and an end-to-end time delay distribution with a narrow support. The maximum end-to-end time delay between the source and destination stations is denoted by $S^I$ (see Figure 9a). Class II traffic is characterized by $\varepsilon$% contention packet loss and an upper bound, $\eta$, on the average number of consecutively lost packets. It is also characterized by an end-to-end time delay distribution with a larger support than Class I. The maximum end-to-end time delay is $S^{II}$ (see Figure 9b). Here, $\varepsilon$ and $\eta$ are arbitrarily small numbers and $S^I < S^{II}$. Contention packet loss represents packets that, because of network congestion, had an end-to-end delay greater than the maximum limit ($S^I$ or $S^{II}$) or were blocked by a buffer management system. For Class I and II traffic, there is no retransmission policy for lost packets. Class III traffic is characterized by 0% end-to-end packet loss, which is achieved with an end-to-end retransmission policy for error correction. If requested, it is also characterized by a minimum average user throughput $\Gamma$ and a maximum average user time delay $T$ (see Figure 9c).

Note that Class I guarantees a service that is comparable to the service in circuit switched networks. Class II guarantees a service with a limit on packet loss caused by a time delay constraint. It is of interest to video and voice sources where some packet loss is acceptable. Finally, Class III supports throughput/time delay sensitive applications such as data and file transfers.

The traffic type specifies broadcasting, multicasting or simply a source-destination pair.


Figure 9. Characterization of the Quality of Service for User Traffic: a) Class I traffic (time delay distribution with bound $S^I$); b) Class II traffic (time delay distribution with bound $S^{II}$, clipping fraction $\varepsilon$, blocking, and gap distribution with average $\eta$); c) Class III traffic (average throughput $\Gamma$ versus average time delay $T$).
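The Class II figures of merit lend themselves to direct measurement. The sketch below estimates $\varepsilon$ and $\eta$ from a trace of end-to-end delays against the bound $S^{II}$; treating a blocked packet as an infinite delay would fold blocking into the same count. This is our reading of the definitions above, not code from the paper.

```python
def class_ii_metrics(delays, s_bound):
    """Estimate the Class II quality of service figures from a trace of
    end-to-end packet delays: the clipping percentage (epsilon) and the
    average length of runs of consecutively lost packets (eta). A packet
    counts as lost when its delay exceeds S^II; blocked packets could be
    folded in by recording them with an infinite delay."""
    lost = [d > s_bound for d in delays]
    epsilon = 100.0 * sum(lost) / len(lost)       # percent packet loss
    gaps, run = [], 0
    for is_lost in lost:
        if is_lost:
            run += 1
        elif run:
            gaps.append(run)
            run = 0
    if run:
        gaps.append(run)
    eta = sum(gaps) / len(gaps) if gaps else 0.0  # mean gap length
    return epsilon, eta

# class_ii_metrics([3, 9, 11, 10, 4, 12], s_bound=8) -> (66.66..., 2.0)
```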


Priority classes are defined within traffic classes. Up to four levels of priority are proposed for each. If one bit is used to distinguish between priority classes, two levels of priority can be associated with each of the four traffic classes: C, I, II and III. For example, the difference between virtual circuit and datagram oriented Class III traffic can easily be handled using a one bit priority scheme. If two bits are used, Classes C and I could get, e.g., two priority levels each, and Classes II and III four priority levels each.

Priority classes can be used for a fine tuning of the performance constraints. For example, Class C could consist of traffic transporting monitoring information and control information (call setup, etc.). For Class I traffic, two different upper bounds for packet recovery, $S^{I_1}$ and $S^{I_2}$, can be specified; similarly for Class II and III traffic. Note that, if a video call is broken down into several subbands, synchronization is needed between different traffic classes. Assuming that there are five bands (one Class I and four Class II) available for transmitting a video session, this can be achieved by setting $S^{II_i} = S^{I_2}$, $i = 1, 2, 3, 4$.

Similarly, priorities can be used by the buffer management system for dropping packets within a given traffic class. Therefore, for the class of networks discussed here, selective packet discarding policies are supported.

1.4.3 Network and User Performance

Performance characteristics play a major role in the process of abstracting the details of the Integrated Reference Model. Two performance criteria are considered: network performance and user performance. Network performance reflects the global behavior of the network. Statistics for packets of the same traffic class in the entire network are used to calculate the associated performance indicators. The same statistics apply to user performance, but the associated performance indicators are computed for each user on the network. Furthermore, the perceived performance measures for ILANs can be formalized in terms of utility functions and costs, both parametrized by constraints, the class of control strategies, and the structure of information on which the control algorithms are based [LAZ85], [LI88], [VAK87].

The utility functions and constraints considered here are associated with the three user traffic classes. The utility of the first traffic class is the probability of blocking (i.e., the frequency of blocked calls) and both the maximum and the average throughput. The constraint is specified as 0% packet loss with a maximum acceptable time delay, $S^I$.

The utility of the second traffic class is the probability of blocking and the average throughput. Both are a function of the traffic load of the different user classes as well as of the resource sharing mechanism employed. The upper bounds on the percentage of clipping (i.e., the percentage of packets considered lost by the system due to excessive delays) and on the average number of consecutively clipped packets arise as constraints. Clipping is a function of the acceptable time delay $S^{II}$ between the moment the information flow was produced at the source node and the time of its reproduction at the receiving node.

The utility of the third traffic class is characterized by the average throughput. The average time delay appears as a constraint and is again parametrized by the traffic load of the different traffic classes and the resource sharing mechanism in use.

1.4.4 Information Structures

Where does the information about the state of the network reside? How much of the available information is used for real-time network control? The answers to these questions depend on the class of algorithms under consideration and on the capabilities of the traffic control architecture. Here, we give broad ideas about possible information structures of the different control modules of the network.

Scheduling and buffer management information is available locally at each node for real-time control. In addition, statistical information about the performance parameters of the schedulers and buffer managers in the network is available at each node. Routing information about the entire network is also available at each node of the network. This information is disseminated asynchronously to all network nodes.

Flow control information is available in real-time at each node, based on the control strategy in use. Typically, based on the number of packets already injected into the system but not yet acknowledged, the control processor estimates the total load in the network. Each controller might also have information about the loading of each switch fabric that is in the path of the controlled traffic flow. The number of users at each node is also known by each admission controller of the network.
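A window-style reading of the flow control information structure is sketched below; the class and method names are invented, and a real controller would combine this count with per-switch loading information.

```python
class FlowController:
    """Window-style sketch of the flow control information structure:
    the controller tracks packets injected but not yet acknowledged
    and treats that count as its estimate of the load it imposes.
    All names are illustrative."""

    def __init__(self, window):
        self.window = window          # maximum packets in flight
        self.in_flight = 0

    def may_send(self):
        return self.in_flight < self.window

    def on_send(self):
        self.in_flight += 1           # packet injected into the network

    def on_ack(self):
        self.in_flight -= 1           # packet has left the network

    def estimated_load(self, path_capacity):
        """Crude estimate: fraction of the path capacity occupied."""
        return self.in_flight / path_capacity
```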


1.4.5 Network Control Parameters

The fundamental network control parameters are: configuration control c, scheduling $\mu$, routing r, flow control $\lambda$, and admission control K. The realization of the scheduling and buffer management schemes for the class of networks considered here will be discussed below.

Two resources in the network are under control: the time of occupancy of switching and communication bandwidth, and buffer space. For the multiclass network model described above, scheduling and buffer management resolve contention between the different packet classes. Scheduling consists of switching and communication bandwidth allocation, while buffer management refers to buffer space partitioning. The essential requirement on these resource sharing mechanisms is to guarantee the appropriate quality of service for each traffic class.

The quality of service is monitored and controlled by the Traffic Control Architecture (TCA) of the network [LAZ90d]. The TCA sees the network as a resource that has to be efficiently allocated among four traffic classes. The pie chart of Figure 10a shows the global view of network resources. Our assumption is that the main network resources, switching bandwidth, communication bandwidth and buffer space, are both observable and controllable. The TCA determines the relative allocation of the above resources to the four classes of service.

Figure 10. Network Resource Allocation: a) Global View; b) Distributed View

The global view of network resource allocation has a distributed implementation. Each switch fabric has its own resource allocation. The TCA for each switch fabric finds the position of the boundaries between Classes C, I, II and III that guarantees the required quality of service (see Figure 10b). In the dynamic environment of an integrated network, we envision that these boundaries will be continually changing.

Asynchronous Time Sharing (ATS) refers to the manner in which scheduling and buffer management resolve contention between the different traffic classes. ATS calls for dynamic scheduling of the four traffic classes at each contention point in the network. Contention points could arise during the allocation of switching or communication bandwidth, or buffer space.

The general concept of the proposed scheduling policy for switching and communication bandwidth allocation is shown in Figure 11a. The switching (or communication) bandwidth is divided into time periods called cycles. Each cycle is divided into four subcycles. During each subcycle (C, I, II, III), the switch fabric is allocated to the corresponding traffic class (C, I, II, III). For example, during subcycle C, Class C packets enter the switch fabric. The length of a subcycle is measured in cells. A cell represents the time required to serve (switch) one packet.

The boundaries between subcycles are determined by a maximum length moveable boundary scheme. The TCA uses four variables (MAX C, MAX I, MAX II and MAX III) to determine the maximum boundary positions between subcycles. MAX C represents the maximum length (in cells) of subcycle C. MAX I represents the maximum length of subcycles C and I combined. MAX II represents the maximum length of subcycles C, I and II combined. MAX III represents the maximum length of the entire cycle. These variables are controlled by the TCA of the switch and will change dynamically according to the traffic load and mix. However, MAX C will be fixed, and represents the maximum amount of bandwidth allocated to Class C traffic.

In addition to the maximum length constraint, a moveable boundary scheme is used. This method switches subcycles when no more packets of the current traffic class are available. Thus, at the beginning of a cycle, the switch is allocated to Class C. The switch will serve Class C until either MAX C is reached or there are no more Class C packets available. At this point, the switch will change to subcycle I and serve Class I traffic. Class I traffic will be served until MAX I is reached or there are no more Class I packets available. When either condition occurs, the switch will change to subcycle II. When MAX II is reached or there are no more Class II packets available, the switch will start subcycle III. Finally, a new cycle begins when MAX III is reached or there are no more Class III packets available.

For a given traffic class, the available bandwidth must be allocated fairly among the multiple access points. A method to limit access in order to guarantee users the appropriate bandwidth is proposed here. Each access point is assigned four LIMIT variables ($L^C$, $L^I$, $L^{II}$, $L^{III}$) by the TCA. These variables are defined as the maximum number of packets of each class that the access point can transmit during one cycle. For example, if the TCA assigns access point X an $L^{II}$ value of 5, then access point X can transmit no more than 5 Class II packets each cycle. The TCA dynamically controls these variables according to the traffic load and profile.

Note that the switching and communication bandwidth allocation policy described above is very general, because a controllable amount of switching (or communication) bandwidth can be provided for each traffic class and access point. For example, one can easily implement processor sharing disciplines by the proper assignment of the LIMIT control parameters.
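The sketch below implements one cycle of the scheme just described: cumulative MAX limits plus the moveable boundary rule that ends a subcycle early when its class has no packets. It is a plain restatement of the prose in code, not the implementation of [LAZ90b]; per-access-point LIMIT variables would additionally cap how many packets each access point may contribute to a class queue within the cycle.

```python
def run_cycle(queues, max_c, max_i, max_ii, max_iii, serve):
    """One ATS cycle under the maximum-length moveable boundary scheme.
    `queues` maps 'C', 'I', 'II', 'III' to FIFO lists of packets; the
    MAX arguments are cumulative limits in cells (MAX C bounds subcycle
    C, MAX I bounds subcycles C and I combined, and so on, MAX III being
    the whole cycle); `serve(packet)` models one cell of service."""
    cells = 0                                  # cells used so far
    for cls, limit in (('C', max_c), ('I', max_i),
                       ('II', max_ii), ('III', max_iii)):
        # Moveable boundary: the subcycle ends early once its class
        # runs out of packets; it never exceeds the cumulative maximum.
        while cells < limit and queues[cls]:
            serve(queues[cls].pop(0))
            cells += 1
    return cells                               # actual cycle length
```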

Each access point to a Switch Fabric or Communication Link requires a buffer organization that supports the four traffic classes. This can be conceptually represented as four separate FIFO memories. The total memory space at each access point, however, is considered a common buffer pool for the use of all traffic classes. This pool is divided into four areas using space partitioning, as shown in Figure 11b. Each buffer pool is assigned four THRESHOLD variables ($B^C$, $B^I$, $B^{II}$, $B^{III}$) by the TCA. A THRESHOLD variable determines the maximum number of packets of a traffic class that are allowed in the common buffer. Once the THRESHOLD value is reached, no additional packets of that class are accepted. For example, if the $B^{III}$ value is 7, then no more than 7 Class III packets are allowed in the buffer.

Figure 11. Resource Allocation: a) Switching and Communication Bandwidth Allocation (cycles with subcycles C, I, II, III and boundaries MAX C, MAX I, MAX II, MAX III); b) Buffer Management (buffer partitions $B^C$, $B^I$, $B^{II}$, $B^{III}$)

The TCA determines the values of these variables using static or dynamic reconfigurability algorithms. In the static case, the variables are set according to the expected average traffic load and profile. In the dynamic case, the variables are continually changing according to the changing traffic load and profile on the network.

In addition to the basic space partitioning among classes, the buffer management system handles the four level priority scheme proposed for each class. Thus, the space assigned to each class is subdivided into four queues which can be accessed independently. The priorities can be used as a basis for dropping packets within a given class. For example, if the THRESHOLD for a given class has been reached, a new arrival could be allowed into the buffer by dropping a lower priority packet of the same class that is already in the buffer.
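A sketch of such an acceptance test follows, reusing the packet attributes assumed earlier. The per-class THRESHOLD check and the priority-based push-out follow the prose; the convention that a lower number means higher priority is our assumption, not one fixed by the paper.

```python
def accept_packet(buffer, packet, thresholds):
    """Space partitioning with priority push-out. `thresholds` maps a
    traffic class to its THRESHOLD variable (B^C, B^I, B^II, B^III);
    `packet.mark` carries the traffic and priority classes as in the
    sketch of section 1.1.1."""
    cls = packet.mark.traffic_class
    same_class = [p for p in buffer if p.mark.traffic_class == cls]
    if len(same_class) < thresholds[cls]:
        buffer.append(packet)                  # room within the class share
        return True
    # Threshold reached: admit the arrival only by dropping a packet of
    # the same class with strictly lower priority (selective discard).
    victim = max(same_class, key=lambda p: p.mark.priority_class)
    if victim.mark.priority_class > packet.mark.priority_class:
        buffer.remove(victim)
        buffer.append(packet)
        return True
    return False                               # the new packet is blocked
```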

The asynchronous operation of the schedulers and buffer managers distributed throughout the system gives the network architecture its asynchronous nature. That is, at a given point in time, a resource within a switch fabric may be allocated to a certain traffic class while the same type of resource in a different fabric is allocated to another class. One possible scenario for a five node network is shown in Figure 12. The shading of each node indicates the traffic class switched. At the particular time instant, two nodes are serving Class I traffic, while one node is serving each of the remaining traffic classes. (The same principle applies to the communication schedulers.) The implementation of the general asynchronous resource sharing principle described above is explained in [LAZ90b]. A set of real-time traffic measurements is described in [LAZ90c].

Figure 12. The Four Virtual Networks

Conclusions

In this paper an object-oriented model of the architecture of integrated networks was proposed. The architecture is based on three principles: the Separation Principle, the Principle of Communication Transparency and the Principle of Asynchronous Resource Management.

The fundamental distinction of the architecture is the structure of the Control plane. In this plane three fundamental tasks were identified and logically separated. These are the Resource Management and Control, Resource Monitoring and Management, and Connection Management and Control tasks. The interface between Communications and Controls is facilitated by a layered architecture of the U- and C-planes. The two sets of controls (connection management and resource allocation) of the Control plane are organized around a Knowledge Database. The latter contains monitoring and management information about the network. This organization allows for transparent communication between different real-time objects in the network.

The concepts presented here represent our experience over the years with the implementation of real-time integrated local and metropolitan area networks [LAZ85], [PAT85], [LAZ90a], [LAZ90b]. The process of abstraction was not linear, as might be inferred from this paper. Although measurements are already available from an implementation of the network architecture according to the principles espoused here, further study is needed to fully understand all the implications of the proposed architecture.

Acknowledgements

The work reported here was started while the author was on a sabbatical leave at the NEC Central Research Laboratories in Kawasaki, Japan. I would like to thank Kojiro Watanabe for creating an excellent working environment. Special thanks go to Hiroyuki Okazaki for his patience and always helpful assistance. This work benefitted from the many interactions within the MAGNET and WIENER groups of the Telecommunication Networks Laboratory. In particular, I would like to mention the discussions with John T. Amenyo, Rafael Gidron, Jay Hyman, Subrata Mazumdar, Giovanni Pacifici, Adam Temple and John S. White. The research reported here was supported by the National Science Foundation under Grant # CDR-84-21402.

References

[BAC87] Baccelli, F. and Bremaud, P., Palm Probabilities and Stationary Queueing Systems, Lecture Notes in Statistics, Vol. 41, Springer Verlag, 1987.
[BAN88] Bancilhon, F., "Object-Oriented Database Systems", ACM, 1988.
[BER87] Bertsekas, D. and Gallager, R., Data Networks, Prentice-Hall, Englewood Cliffs, 1987.
[BER89] Bertsekas, D.P. and Tsitsiklis, J.N., Parallel and Distributed Computation, Prentice-Hall, Englewood Cliffs, 1989.
[BOV87] Bovopoulos, A.D. and Lazar, A.A., "Decentralized Algorithms for Optimal Flow Control", Proceedings of the 25th Conference on Communication, Control and Computing, University of Illinois at Urbana, Urbana, IL, September 30 - October 2, 1987, pp. 979-987.
[BOV90a] Bovopoulos, A.D. and Lazar, A.A., "Optimal Resource Allocation for Markovian Queueing Networks: The Complete Observation Case", Stochastic Models, to appear.
[BOV90b] Bovopoulos, A.D. and Lazar, A.A., "On the Effect of Delayed Feedback Information on Network Performance", submitted to the IEEE Transactions on Communications.
[BRE81] Bremaud, P., Point Processes and Queues: Martingale Dynamics, Springer Verlag, 1981.
[CCI85a] CCITT Red Book, Integrated Services Digital Network, Recommendations of the Series I, Vol. III, Fascicle III.5, Geneva, 1985.
[CCI85b] CCITT Red Book, Specifications of Signaling System No. 7, Recommendations Q.701-Q.714, Vol. VI, Fascicle VI.7, Geneva, 1985.
[CCI85c] CCITT Red Book, Specifications of Signaling System No. 7, Recommendations, Vol. VI, Fascicle VI.8, Geneva, 1985.
[CCI85d] CCITT Red Book, Digital Access Signaling System, Recommendations Q.920-Q.931, Vol. VI, Fascicle VI.9, 1985.
[ELM89] Elmasri, R. and Navathe, S.B., Fundamentals of Database Systems, The Benjamin/Cummings Publishing Company, New York, 1989.
[FER90a] Ferrandiz, J.M. and Lazar, A.A., "Consecutive Packet Loss in Real-Time Packet Traffic", Proceedings of the Fourth International Conference on Data Communication Systems and Their Performance, Barcelona, Spain, June 20-22, 1990.
[FER90b] Ferrandiz, J.M. and Lazar, A.A., "Admission Control for Real-Time Packet Traffic", submitted for publication.
[FER90c] Ferrandiz, J.M. and Lazar, A.A., "A Study of Loss in N/GI/1 Queueing Systems", submitted for publication.
[FES89] Fehskens, L., "An Architectural Strategy for Enterprise Management", in Integrated Network Management, I, Meandzija, B. and Westcott, J., editors, North-Holland, 1989, pp. 41-60.
[HAL88] Halsall, F., Data Communications, Computer Networks and OSI, Addison-Wesley, Reading, MA, 1988.
[HSI87a] Hsiao, M.T. and Lazar, A.A., "Optimal Flow Control of Markovian Queueing Networks with Multiple Controllers", Proceedings of the Third International Conference on Data Communication Systems and their Performance, Rio de Janeiro, Brazil, June 22-25, 1987, pp. 357-372.
[HSI87b] Hsiao, M.T. and Lazar, A.A., "A Game Theoretic Approach to Decentralized Flow Control of Markovian Queueing Networks", Proceedings of PERFORMANCE'87, Brussels, Belgium, December 9-11, 1987, pp. 55-73.
[HSI89] Hsiao, M.-T. and Lazar, A.A., "An Extension to Norton's Equivalent", Queueing Systems: Theory and Applications, Vol. 5, 1989, pp. 401-411.
[HSI90] Hsiao, M.-T. and Lazar, A.A., "Optimal Flow Control of Multi-Class Queueing Networks with Decentralized Information", IEEE Transactions on Automatic Control, Vol. 35, No. 7, July 1990.
[ISO89] ISO Performance Management Rapporteur Group, "Information Processing - Open Systems Interconnection - Performance Management Working Draft Document" (Third Draft), ISO/IEC JTC 1/SC 21, 18 January 1989.
[KLE88] Klerer, M.S., "The OSI Management Architecture: An Overview", IEEE Network, Vol. 2, No. 2, March 1988, pp. 20-29.
[LAB90] Labourdette, J.-F.P. and Acampora, A.S., "Wavelength Agility in Multihop Lightwave Networks", Proceedings of INFOCOM'90, San Francisco, CA, June 5-7, 1990.
[LAZ85] Lazar, A.A., Patir, A., Takahashi, T. and El Zarki, M., "MAGNET: Columbia's Integrated Network Testbed", IEEE Journal on Selected Areas in Communications, Vol. SAC-3, No. 6, 1985, pp. 859-871.
[LAZ86] Lazar, A.A., Mays, M.A. and Hori, K., "A Reference Model for Integrated Local Area Networks", Proceedings of the International Conference on Communications, Toronto, Canada, June 22-25, 1986, pp. 531-536.
[LAZ87] Lazar, A.A., Amenyo, J.T. and Mazumdar, S., "WIENER: A Distributed Expert System for Dynamic Resource Allocation in Integrated Networks", Proceedings of the IEEE Symposium on Intelligent Control, Philadelphia, PA, January 18-20, 1987, pp. 159-164.
[LAZ90a] Lazar, A.A., Temple, A. and Gidron, R., "An Architecture for Integrated Networks that Guarantees Quality of Service", International Journal of Digital and Analog Communication Systems, Vol. 3, No. 2, August 1990.
[LAZ90b] Lazar, A.A., Temple, A. and Gidron, R., "MAGNET II: A Metropolitan Area Network Based on Asynchronous Time Sharing", IEEE Journal on Selected Areas in Communications, Vol. SAC-8, No. 8, October 1990.
[LAZ90c] Lazar, A.A., Pacifici, G. and White, J.S., "Real-Time Traffic Measurements on MAGNET II", IEEE Journal on Selected Areas in Communications, Vol. SAC-8, No. 3, April 1990, pp. 467-483.
[LAZ90d] Lazar, A.A., "The Game of Networking", in preparation.
[LES80] Lesser, V.R. and Erman, L.D., "Distributed Interpretation: A Model and Experiment", IEEE Transactions on Computers, Vol. C-29, No. 12, December 1980, pp. 1144-1162.
[LIT89] Little, G.S., "Information Processing - Open Systems Interconnection - Working Draft of the Configuration Management Overview", ISO/IEC JTC1/SC21, 16 January 1989.
[MAZ89] Mazumdar, S. and Lazar, A.A., "Knowledge-Based Monitoring of Integrated Networks", Proceedings of the First International Symposium on Integrated Network Management, Boston, MA, May 14-17, 1989, pp. 235-243.
[MAZ90a] Mazumdar, S. and Lazar, A.A., "Monitoring Integrated Networks for Performance Management", Proceedings of the International Conference on Communications, Atlanta, GA, April 16-19, 1990, pp. 289-294.
[MAZ90b] Mazumdar, S. and Lazar, A.A., "Knowledge-Based Monitoring of Integrated Networks for Performance Management", submitted for publication to JSAC.
[MIC88] Micallef, J., "Encapsulation, Reusability and Extensibility in Object-Oriented Programming Languages", Journal of Object-Oriented Programming, Vol. 1, No. 1, April/May 1988, pp. 12-35.
[PAT85] Patir, A., Takahashi, T., Tamura, Y., El Zarki, M. and Lazar, A.A., "A Fiber Optic Based Integrated LAN for MAGNET's Testbed Environment", IEEE Journal on Selected Areas in Communications, Vol. SAC-3, No. 6, November 1985, pp. 872-881.
[RIC83] Rich, E., Artificial Intelligence, McGraw Hill, New York, 1983.
[SCH87] Schwartz, M., Telecommunication Networks: Protocols, Modeling and Analysis, Addison Wesley, Reading, MA, 1987.
[WAL88] Walrand, J., An Introduction to Queueing Networks, Prentice Hall, Englewood Cliffs, NJ, 1988.
[WU90] Wu, S.F. and Kaiser, G.E., "Network Management with Consistently Managed Objects", Proceedings of the Global Telecommunications Conference, San Diego, CA, December 1990.
