An ATM WAN/LAN Gateway Architecture

Gary J. Minden, Joseph B. Evans, David W. Petr, Victor S. Frost
Telecommunications & Information Sciences Laboratory
Department of Electrical & Computer Engineering
University of Kansas
Lawrence, KS 66045-2228

Abstract

This paper describes a gigabit LAN/WAN gateway being developed for the MAGIC gigabit testbed. The gateway interfaces a gigabit LAN developed by the DEC Systems Research Center to the MAGIC SONET/ATM wide area network. This user-network interface (UNI) provides 622 Mb/s throughput between the LAN and WAN environments, and supports either a single OC-12c or four OC-3c tributaries. The architecture will initially support IP routing over the B-ISDN backbone, but it is not restricted to the TCP/IP protocol suite.

The authors can be contacted via e-mail at [email protected]. This research is partially supported by DARPA under contract F19628-92-C-0080, Digital Equipment Corporation, the Kansas Technology Enterprise Corporation, and Sprint.

1 Introduction

Computer communications networks are reaching transmission capacities exceeding one gigabit per second. Networks are traditionally partitioned into Local Area Networks (LANs) and Wide Area Networks (WANs) for a variety of economic and regulatory reasons. While LANs have been primarily oriented toward data traffic, they are increasingly viewed as the medium for the real-time traffic associated with multimedia applications. On the other hand, WANs have traditionally carried real-time circuit oriented traffic, primarily voice, but data traffic is of growing importance. Evolving standards and systems under the label “Broadband ISDN” (B-ISDN) will integrate data and real-time traffic to provide a variety of services to users.

The convergence of integrated traffic and the possibility of new services has led both exchange carriers and computer network providers to embrace technologies such as SONET (Synchronous Optical NETwork) and particularly ATM (Asynchronous Transfer Mode) for both local and wide area networks. The use of similar technology in the LAN and WAN environments provides the opportunity for geographically distributed high performance networks. A key element in realizing this goal is the development of efficient gateways, or user-network interfaces (UNIs), between the LAN and WAN environments; although the basic technology used on both sides of the gateway may be similar, the operational aspects of LANs and WANs are significantly different. The gateway architecture described in this paper supports communication between LANs and WANs operating at gigabit per second rates.

1.1 Gigabit LAN/WAN Overview

The Multidimensional Applications and Gigabit Internetwork Consortium (MAGIC) is a group of industrial, academic, and government organizations participating in gigabit network research. The MAGIC backbone network operates at 2.4 Gb/s, and each site on the network includes LANs or hosts communicating at gigabit per second rates. The MAGIC network is depicted in Figure 1.

[Figure 1: MAGIC Network. The Minnesota Supercomputer Center (Minneapolis, Minnesota), the EROS Data Center (Sioux Falls, South Dakota), the Future Battle Lab (Ft. Leavenworth), the University of Kansas (Lawrence, Kansas), and US Sprint (Kansas City) campus networks are connected by a 2.4 Gb/s SONET/ATM backbone.]

The University of Kansas (KU) will deploy an experimental gigabit LAN called the AN2, provided by Digital Equipment Corporation and developed by the DEC Systems Research Center [1]. The AN2 is a local area network based on ATM technology [7]. The KU network is shown in Figure 2. The network will consist of several switches (initially two), connected by interswitch links operating at 1 Gb/s. DECStation 5000 hosts equipped with AN2 host adapter boards will be attached to the switches. These hosts will communicate locally via the AN2 switches, and with remote MAGIC sites via a LAN/WAN gateway developed at KU.

1.2 The LAN/WAN Interface

The gateway supports B-ISDN ATM traffic between the KU local area network and the MAGIC wide area network at SONET OC-12 or OC-12c rates (622.08 Mb/s) [2, 13]. The architecture of the gateway is based on the existing AN2 gigabit interswitch line card design. The gateway and associated hosts support signaling and connection management procedures for the LAN/WAN interface.

[Figure 2: University of Kansas AN2 Configuration. AN2 switches in Learned Hall, Ellsworth Hall, and Nichols Hall with attached hosts, an IP router with DS3, Ethernet, and FDDI connections, and a gateway switch linking the campus to the MAGIC WAN over OC-12, with OC-48 as a possible future expansion.]

A variety of research issues are being addressed through implementation and application of the LAN/WAN interface. A significant issue to be addressed in the testbed is the internetworking of the connection-oriented WAN environment and connectionless LAN environments. Connection setup procedures are being developed to provide virtual circuits for IP datagram traffic traveling from LAN to LAN via the B-ISDN WAN. These procedures will initially focus on permanent virtual circuits (PVCs), but will later be extended to switched virtual circuits (SVCs). In addition to the issues of simple connection management, more complex network control issues that arise in an ATM network need to be addressed. In particular, dynamic bandwidth allocation mechanisms promise to provide LAN/WAN services more economically. The gateway architecture is designed to allow the testing and evaluation of dynamic bandwidth allocation algorithms.

The architecture proposed for B-ISDN is a connection-oriented (CO) transmission service [9]. Most data communications based LANs and common protocols (e.g., IP), however, implement a connectionless (CL) service. A well recognized [5, 8, 10] challenge in B-ISDN is the integration of connectionless services over a connection-oriented B-ISDN. Agents are needed between connectionless and connection-oriented services to manage and control the flow of information. It is further expected that the data rates of future LANs (i.e., integrated access points) and B-ISDN will be of comparable orders of magnitude, so that assigning the peak access rate to each connection would result in a significant waste of resources. The challenge in this new environment is to match the unknown dynamics of the internet connectionless traffic to the characteristics of the virtual circuit (VC) carrying this traffic in the connection-oriented system.

2 The AN2 Gigabit Local Area Network

The AN2 LAN is a switch-based local area network. Hosts are connected to switches by one or more 155 Mb/s full duplex links, and switches are interconnected by 1 Gb/s links. Switches are 16 by 16 port crossbars that can switch at a maximum aggregate rate of 16 Gb/s with greater than 95% throughput.

The AN2 is a virtual-circuit based system. Packets received from the host are segmented by host adapters into streams of ATM cells that are transmitted on a virtual circuit using AAL 5 [12]. The host adapter transmits cells to its attached switch. Switches then move the cells through the network and deliver them, in order, to the destination host adapter via a virtual circuit. The receiving host adapter re-assembles packets from the cell stream and, once a packet is completely re-assembled, sends the packet to the host's main memory. Host adapters plug into the Turbochannel I/O bus of the DECStation 5000.

Traffic is divided into two classes: sporadic traffic, and periodic (guaranteed bandwidth) traffic. At call setup time, hosts can request guaranteed bandwidth in units of 1 Mb/s. If the bandwidth is available, switches along the route cooperate to establish individual crossbar schedules to provide the guaranteed bandwidth.
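To make the segmentation step concrete, the following Python sketch shows how a packet might be carved into 48-byte ATM cell payloads with an AAL 5-style trailer (padding followed by an 8-byte trailer carrying the length and a CRC-32). It is an illustration only, not the AN2 host adapter logic; zlib.crc32 stands in for the AAL 5 CRC-32, whose exact bit conventions are glossed over here.

# A minimal sketch of AAL 5-style segmentation of a packet into 48-byte
# ATM cell payloads, assuming the 8-byte trailer layout (UU, CPI,
# 16-bit length, 32-bit CRC). Illustrative only.
import struct
import zlib

CELL_PAYLOAD = 48  # bytes of payload carried by each ATM cell

def aal5_segment(packet: bytes) -> list[tuple[bytes, bool]]:
    """Return (cell_payload, last_cell_flag) pairs for one packet."""
    # Pad so that payload plus the 8-byte trailer is a multiple of 48 bytes.
    pad_len = (-(len(packet) + 8)) % CELL_PAYLOAD
    padded = packet + bytes(pad_len)
    trailer = struct.pack(">BBH", 0, 0, len(packet))   # UU, CPI, length
    crc = zlib.crc32(padded + trailer) & 0xFFFFFFFF    # stand-in for AAL 5 CRC-32
    pdu = padded + trailer + struct.pack(">I", crc)
    cells = [pdu[i:i + CELL_PAYLOAD] for i in range(0, len(pdu), CELL_PAYLOAD)]
    # The receiver reassembles until the last-cell flag, which the host
    # adapter would signal in the ATM header PTI field.
    return [(c, i == len(cells) - 1) for i, c in enumerate(cells)]

if __name__ == "__main__":
    cells = aal5_segment(b"x" * 100)
    print(len(cells), "cells; last cell flagged:", cells[-1][1])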


2.1 AN2 Switches

Switches implement the following features:

- ordered delivery of cells within a virtual circuit
- no head-of-line blocking
- guaranteed bandwidth
- hop-by-hop flow control
- shortest path routing

Three types of line cards plug into the switch crossbar: (1) link line cards, with a single full duplex link operating at 1 Gb/s; (2) host line cards, with four full duplex links operating at 155 Mb/s; and (3) the AN2/SONET Gateway, operating at 622 Mb/s using the SONET transmission protocol.

Line cards consist of an input side and an output side. The input side receives cells over the link, routes those cells, queues them, and sends them through the crossbar. The output side receives cells from the crossbar, buffers them, transmits the cells on the link, and manages the flow control mechanism. Queues will not overflow because the flow control mechanism (described below) will not allow it.

Cells are synchronously switched through the crossbar with a slot period of 520 ns. During arbitration, lasting one slot period, each line card requests access to each output line card for which it has traffic. The arbitration mechanism will result in either no connection or a single connection to an output line card. By posting requests to each output for which there is traffic and implementing a distributed arbitration mechanism [1], the AN2 avoids the usual head-of-line blocking in the input queues. Cells from the crossbar are immediately transmitted on the output link. Switching slots are grouped into frames of 1024 slots. Slots within a frame can be dedicated to a specific connection between an input line card and an output line card. This mechanism supports the guaranteed bandwidth feature.
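As a rough illustration of how dedicated slots might be derived from a guaranteed-bandwidth request, the Python sketch below converts a rate in Mb/s into a number of slots out of the 1024-slot frame and spreads them evenly. The per-slot payload capacity and the spreading rule are assumptions for illustration; the actual AN2 schedules are computed cooperatively by the switches at call setup.

# A hedged sketch of spreading guaranteed-bandwidth slots over the
# 1024-slot AN2 crossbar frame. The Mb/s-to-slot mapping is illustrative.
SLOTS_PER_FRAME = 1024
SLOT_PERIOD_NS = 520
CELL_PAYLOAD_BITS = 48 * 8   # assumed payload bits carried per dedicated slot

def slots_for_rate(rate_mbps: float) -> int:
    """Slots per frame needed to carry rate_mbps of payload (ceiling)."""
    frame_time_s = SLOTS_PER_FRAME * SLOT_PERIOD_NS * 1e-9
    bits_per_frame = rate_mbps * 1e6 * frame_time_s
    return max(1, -(-int(bits_per_frame) // CELL_PAYLOAD_BITS))

def schedule(rate_mbps: float) -> list[int]:
    """Spread the dedicated slots roughly evenly across the frame."""
    n = slots_for_rate(rate_mbps)
    return sorted({round(i * SLOTS_PER_FRAME / n) for i in range(n)})

print(slots_for_rate(1))      # slots per frame for a 1 Mb/s guarantee
print(schedule(10)[:5])       # first few dedicated slot indices for 10 Mb/s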

The AN2 implements a strict window flow control mechanism on a link-by-link and virtual-circuit-by-virtual-circuit basis. During call setup, buffers are allocated on the input side of each line card along the route for the virtual circuit. The number of cell buffers necessary in each line card depends on the maximum bandwidth of the virtual circuit and the distance between the output and input cards. The necessary number of buffers Nb is:

Nb = (C bits/s × 2D km × 4.95 μs/km) / (53 bytes/buffer × 8 bits/byte),

where C is the capacity in bits per second, D is the distance in kilometers, 4.95 μs/km is the propagation delay of light in fiber, and the denominator is the number of bits per cell buffer. For a one kilometer link operating at 155 Mb/s, four cell buffers per virtual circuit are needed on each line card input along the route. The buffer sizes are thus relatively small for local ATM systems.

A line card will not transmit a cell on a link unless it is sure there is a buffer at the receiving end to store that cell. Line card outputs maintain an account balance of the number of buffers available at the receiver for each virtual circuit. As each cell is transmitted, the account balance for that virtual circuit is decremented. When the account balance reaches zero, the output notifies the input line card, through the crossbar, to stop the virtual circuit. The input line card marks that virtual circuit stopped and will not attempt to transmit further cells on that virtual circuit until it is started again.

When a line card sends a cell through the crossbar, the virtual circuit identifier is sent to that line card's output side. The output side piggybacks the virtual circuit identifier of the forwarded cell on the cell stream going back to the far end (an empty cell is used if there is no return traffic). This is an acknowledgment that a cell buffer on the forwarded virtual circuit has been freed. Note that other mechanisms are possible, such as batching acknowledgments into a return cell. The input side of the line card at the far end strips off the piggybacked acknowledgment and sends it to the output side. The output side increments the buffer account balance for the virtual circuit. If cells are flowing smoothly, an acknowledgment for a virtual circuit will arrive at the account just before the next cell on that virtual circuit is transmitted. Acknowledgment loss and account resynchronization between output and input are beyond the scope of this paper.

Each line card has a microprocessor, called the LCP, to control, monitor, and manage the line card. The LCP is involved in call setup, call tear down, route finding, resource allocation, periodic bandwidth allocation, line card monitoring, and performance measurement.
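The following Python sketch models the per-virtual-circuit credit accounting described above: the buffer count comes from the Nb formula, credits are decremented as cells go out, and a piggybacked acknowledgment restores a credit. The class and method names are illustrative, not the AN2 implementation.

# A minimal sketch of per-VC credit ("account balance") flow control.
import math

def buffers_needed(capacity_bps: float, distance_km: float) -> int:
    """Nb = (C * 2D * 4.95 us/km) / (424 bits per cell buffer)."""
    return math.ceil(capacity_bps * 2 * distance_km * 4.95e-6 / (53 * 8))

class VcOutput:
    """Output side of a line card for one virtual circuit."""
    def __init__(self, capacity_bps: float, distance_km: float):
        self.credits = buffers_needed(capacity_bps, distance_km)
        self.stopped = False

    def can_send(self) -> bool:
        return not self.stopped and self.credits > 0

    def on_cell_sent(self):
        self.credits -= 1            # one receive buffer now in use
        if self.credits == 0:
            self.stopped = True      # tell the input side to stop this VC

    def on_piggybacked_ack(self):
        self.credits += 1            # far end freed a buffer for this VC
        self.stopped = False

vc = VcOutput(155e6, 1.0)
print(vc.credits)                    # 4 buffers for a 1 km, 155 Mb/s link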

3 The Gateway Architecture

The AN2/SONET Gateway is a hardware device that connects the Digital Equipment Corporation AN2 Local ATM network to the SONET based B-ISDN ATM network. The purpose of the gateway is to provide a means for data to move between the AN2 and the SONET based wide area network. The AN2/SONET Gateway supports the following features:

- operation at the SONET OC-12/OC-12c (622.08 Mb/s) capacity on the wide area side via a fiber optic connection
- operation within the DEC SRC AN2 local ATM switch by connecting to the AN2 switch backplane
- experimental techniques for dynamic bandwidth allocation
- experimental techniques for interoperability between connection-oriented and connectionless protocols; in particular, the gateway supports TCP/IP traffic, but it is not restricted to that protocol suite
- experimental signaling protocols for call setup and call parameter negotiations
- measurement of network performance

The AN2/SONET Gateway is a single card that plugs into an AN2 switch port. The gateway, shown in Figure 3, contains three primary sub-systems: the receive section, the transmit section, and the line card processor (LCP) section.


[Figure 3: AN2/SONET Gateway. The gateway card connects the AN2 switch backplane (crossbar data, AN2 control, and credits) to the SONET OC-12/OC-12c network termination equipment through a transmitter section and a receiver section, with a line card processor alongside handling control and statistics.]

3.1 Transmit Section

The transmit section, shown in Figure 4, connects the AN2 switch crossbar to the transmit SONET Network Termination Equipment (NTE) of the WAN provider. The transmit section interface to the NTE is via single-mode optical fiber at OC-12 rates (622.08 Mb/s). The transmit section may optionally connect to the NTE at OC-3 rates (155.52 Mb/s). The transmit section will normally use clocks derived from the receive section, but will also have a crystal-controlled local clock oscillator.

The transmitter receives ATM cells from the AN2 crossbar. The cells are buffered and merged with the SONET overhead information stream. The combined stream forms a SONET frame. The LCP will be able to receive cells from the AN2 crossbar. The transmit section will maintain status for each possible virtual circuit (VC). The status information will include the number of buffers available at the remote end. This will allow the transmitter to participate in the AN2 flow control strategy, within the limits imposed by memory, latency, and the WAN. The transmit section will forward cell acknowledgements issued by the receive section when appropriate. The transmit section supports a single OC-12c stream or four OC-3c streams multiplexed into a single OC-12.

[Figure 4: Gateway Transmitter Architecture. The AN2 Crossbar Interface feeds a cell buffer SRAM controlled by the Queue and Credit/Bandwidth Management unit (with Credit RAM and BW Schedule RAM); cells pass through the SONET Transpose and ATM Scrambler, are merged with the SONET Overhead RAM contents, and continue through the SONET Formatter, SONET Scrambler, and electrical-to-optical interface onto the OC-12/OC-12c link; the LCP and a Statistics unit attach alongside.]

The AN2 Crossbar Interface connects the gateway card to the AN2 switch backplane. Cells are received at the AN2 Crossbar Interface in thirteen clock cycles, 32 bits per cycle. The transmit section only accepts cells destined for the gateway. The Crossbar Interface temporarily buffers partially received cells arriving from the AN2 crossbar. Upon completion of a cell arrival, and given successful arbitration, the Crossbar Interface writes the cell to the SRAM unit. The transmit section only participates in arbitration if there are sufficient buffer resources in the SRAM. Statistics on cell and packet arrivals are extracted at the Crossbar Interface and forwarded.

The SRAM unit is used to buffer cells for a number of virtual circuits. Buffers are managed on a per-virtual-circuit basis, so that flow control on one VC will not hold up another VC. The SRAM unit also serves as a rate adaptation unit. The AN2 operates at a clock rate of 40 ns (25 MHz) per 32-bit word and a cell rate of 520 ns, since 13 words per cell are forwarded across the AN2 crossbar (the HEC byte is not passed through the crossbar).


In contrast, the SONET transmission clock is 12.86 ns per byte, or 681.6 ns per cell at the peak rate. Cells are loaded into the SRAM unit at the AN2 rate and read from the SRAM unit at the SONET rate. The SRAM unit is controlled by the Queue and Credit/Bandwidth Management unit. The Queue Management module schedules memory accesses to the SRAM unit and maintains pointers to the cell stream locations in SRAM. The Credit/Bandwidth Management module subtracts credits from the credit bank for end-to-end flow control, and generates batches of acknowledgements (credits). This unit also maintains the scheduling tables for i/m bandwidth control [6]. The LCP can access the various tables through the Queue and Credit/Bandwidth Management unit.

The SONET Transpose and ATM Scramble unit operates in one of two modes: a single OC-12c ATM stream, or four OC-3c ATM streams multiplexed into an OC-12. The mode is selected at system configuration time. When operating in OC-12c mode, the Transpose unit acts as a two-cell ping-pong buffer, so that one cell can be byte transposed and injected into the SONET frame while a second is read from the SRAM. In four-by-OC-3c mode, four streams are multiplexed into a single byte stream for encapsulation in a SONET frame; the Transpose unit buffers two sets of four ATM cells, and byte interleaving is performed by simply extracting a byte from each stream in a round-robin fashion. In either mode, byte multiplexing is performed on the ATM streams using multiplexers under the control of a finite state machine. The payload section of each ATM cell stream is scrambled according to the prescribed self-synchronous scrambler polynomial [6].

The SONET Overhead RAM is a fast, dual-ported SRAM containing the SONET overhead bytes. The LCP loads the SONET Overhead RAM with the proper section, line, and path overhead bytes. Sixteen overhead buffers are provided so that the contents can be altered while the system is in operation. The SONET overhead is multiplexed with the byte-interleaved ATM cell streams via a tristate bus to generate the input stream to the SONET Formatter.
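The round-robin byte interleaving used in four-by-OC-3c mode can be sketched in a few lines of Python; this shows only the interleaving order, not the gateway's multiplexer hardware, and the toy streams are invented for illustration.

# Round-robin byte interleaving of four tributary cell streams (sketch).
def interleave(tributaries: list[bytes]) -> bytes:
    """Byte-interleave equal-length tributary streams (e.g., four OC-3c)."""
    assert len({len(t) for t in tributaries}) == 1, "streams must be aligned"
    out = bytearray()
    for i in range(len(tributaries[0])):
        for stream in tributaries:          # one byte per stream, in turn
            out.append(stream[i])
    return bytes(out)

streams = [bytes([n]) * 6 for n in (1, 2, 3, 4)]   # four toy cell streams
print(interleave(streams)[:8].hex())               # 0102030401020304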


Because the path frame is aligned with the section and line overhead at transmission, the control for the multiplexing is straightforward. The SONET Formatter takes payload and overhead data from the preceding stages and performs the remaining functions required to complete a SONET frame. In particular, the SONET Formatter performs the combinational operations necessary to fill the parity bytes of the path, section, and line overhead. The SONET Scrambler scrambles the SONET signal according to the standard polynomial [13] and converts the byte-wide parallel stream to a serial stream. The serial data is then fed into the Electrical-to-Optical interface. The transmit section is implemented using a combination of Xilinx FPGAs, a commercially available SONET scrambler integrated circuit, and commercially available memory chips.
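For reference, the kind of frame-synchronous scrambling SONET specifies (a pseudo-random sequence generated by x^7 + x^6 + 1, with the register reset to all ones each frame) can be sketched as follows. Which overhead bytes are excluded from scrambling, and the exact sequence phase, are glossed over here; see the SONET criteria [13] for the precise rules.

# Frame-synchronous scrambler sketch (generator x^7 + x^6 + 1).
def sonet_scramble(frame: bytes) -> bytes:
    """XOR the frame with the x^7 + x^6 + 1 pseudo-random sequence."""
    state = 0x7F                                   # 7-bit register, all ones
    out = bytearray()
    for byte in frame:
        scrambled = 0
        for bit in range(7, -1, -1):               # most significant bit first
            prbs = (state >> 6) & 1                # current scrambler bit
            feedback = ((state >> 6) ^ (state >> 5)) & 1
            state = ((state << 1) | feedback) & 0x7F
            scrambled |= (((byte >> bit) & 1) ^ prbs) << bit
        out.append(scrambled)
    return bytes(out)

payload = bytes(8)                                  # zeros expose the sequence
masked = sonet_scramble(payload)
print(masked.hex())
print(sonet_scramble(masked) == payload)            # True: same XOR undoes it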

3.2 Receive Section

The receive section, shown in Figure 5, connects to the received signal from the wide area SONET NTE. The interface to the NTE is via single-mode optical fiber at OC-12 or OC-3 rates. Timing information (bit, byte, and frame) is extracted from the received SONET signal, and is used in both the receive and transmit sections.

The receive section extracts the SONET overhead and the SONET payload from the incoming stream. The SONET payload is processed as ATM cells in accordance with evolving SONET/ATM standards. The ATM header is checked for errors prior to further processing. ATM cells are buffered on a virtual circuit basis. The destination of the received ATM cells is determined by a routing table. ATM cells forwarded through the switch or sent to the LCP are acknowledged to the remote end via a return path through the transmit section. The receive section supports a single OC-12c stream or four OC-3c streams multiplexed into a single OC-12.

The Optical-to-Electrical converter receives and detects the optical serial bit stream and outputs an electrical serial bit stream. A bit clock is recovered from the received signal.

[Figure 5: Gateway Receiver Architecture. The optical-to-electrical converter feeds the SONET Descrambler and Synchronization unit; the SONET Termination unit writes overhead bytes to the SONET Overhead RAM for the LCP; the ATM Cell Delineation and Descramble unit recovers cells into an SRAM stage managed by the Cell Stream Management unit; cells then move into the VRAM buffer under the Queue Management and Crossbar Arbitration unit (with Queue RAM) and out onto the AN2 crossbar; Credit Management, Statistics, and LCP paths attach alongside.]

The SONET Descrambler and Synchronization unit searches for and synchronizes itself with the SONET frame synchronization pattern. At the beginning of each frame, the start of frame is detected and an indication signal is asserted. A frame lock signal is also asserted as long as synchronization is maintained. This unit also generates a byte clock that is used throughout the gateway system. The SONET Descrambler and Synchronization unit also applies the standard descrambling function to the received signal and converts the serial bit stream to a byte-parallel stream for further processing. Finally, this unit checks the SONET section and line parity bytes.

The ATM Cell Delineation and Descramble unit searches for valid ATM cell headers in the received payload byte stream, using the standard cell synchronization method. In particular, the Cell Delineation unit performs the header error check (HEC) CRC and performs comparisons until a match is found in the byte stream. When a prescribed number of consecutive matches occur, synchronization is indicated. If a prescribed number of consecutive matches then fail, loss of synchronization is indicated. The system supports both a single OC-12c stream and four OC-3c streams. ATM cell stream alignment across the four OC-3c payload streams is not assumed.
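The header error check used in that delineation search is the standard ATM HEC: a CRC-8 with generator x^8 + x^2 + x + 1 over the first four header bytes, with the 0x55 coset added. A small Python sketch of the check is given below; the hit/miss thresholds of the delineation state machine are left out, since the text only says "a prescribed number".

# ATM header error check (HEC) computation and verification (sketch).
def hec(header4: bytes) -> int:
    """CRC-8 (x^8 + x^2 + x + 1) of the first 4 header bytes, plus 0x55."""
    crc = 0
    for byte in header4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55

def header_ok(header5: bytes) -> bool:
    """True if the fifth header byte matches the HEC of the first four."""
    return hec(header5[:4]) == header5[4]

idle_header = bytes([0x00, 0x00, 0x00, 0x01])       # idle/unassigned cell
print(hex(hec(idle_header)))                        # 0x52
print(header_ok(idle_header + bytes([hec(idle_header)])))   # True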

Idle cells are dropped by the cell delineation unit and are not forwarded for further processing. ATM cells with incorrect headers are dropped; no error correction is currently attempted. The Descrambler unit applies the cell descrambler polynomial to the cell payload. Once they are delineated and descrambled, partial cells are written to the SRAM unit for buffering and routing. The SRAM unit also provides the rate adaptation between the SONET clock rates and the AN2 crossbar clock rates. The cells containing batches of credits which were collected by the Credit Management unit on the transmit side are inserted on the input bus to the SRAM unit, as are the cells generated by the Statistics unit and cells transmitted by the LCP.

The SONET Termination unit extracts the SONET overhead bytes of the received frame and writes those bytes to the SONET Overhead SRAM. The SONET Overhead SRAM buffers the overhead information for subsequent reading and processing by the LCP. The SONET Termination unit uses the path frame pointer in the SONET line overhead to determine the location of the path overhead. In the case of an OC-12c stream, this is a single pointer, but in the case of four OC-3c streams, pointers to four offset frames must be tracked. This unit also checks the SONET path parity bytes, and informs the cell delineation unit about path slippage in the SONET frame.

The SRAM unit buffers the ATM cell streams prior to buffering on a per-VCI basis in the VRAM unit. The SRAM unit is managed by the Cell Stream Management unit, which schedules memory accesses for the cell streams and provides an indication when cells are ready to be buffered in the VRAM. The Cell Stream Management unit also controls VC extraction from the received cells. The VRAM Buffer provides sufficient buffering for ATM cells so that congestion in the AN2 network will not cause cell loss due to buffer overflow. As in the standard AN2 line card, cells are buffered on a per-VCI basis, so that flow control on one VC will not affect other VCs.


The Queue Management and Crossbar Arbitration unit controls the VRAM Buffer and maintains the queue tables. This unit also manages the crossbar arbitration cycle and controls the write operation from the VRAM to the AN2 crossbar. The Queue RAM holds pointers to the cells in the VRAM; this structure allows buffer resources to be dynamically allocated. The receive section is implemented using a combination of Xilinx FPGAs, a commercially available SONET descrambler integrated circuit, three-port video dynamic random access memories for the VRAM Buffer, and other commercially available memory chips.
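The pointer-based arrangement just described, per-VC queues drawn from a shared cell buffer pool, can be sketched as a data structure like the following; the names and sizes are illustrative, and the real Queue RAM/VRAM logic is implemented in hardware.

# Per-VC cell queues over a shared buffer pool (data-structure sketch).
from collections import deque

class CellBufferPool:
    def __init__(self, num_buffers: int):
        self.free = deque(range(num_buffers))      # indices of free VRAM slots
        self.slots = [None] * num_buffers          # cell storage
        self.queues = {}                           # vci -> deque of slot indices

    def enqueue(self, vci: int, cell: bytes) -> bool:
        if not self.free:
            return False                           # pool exhausted
        slot = self.free.popleft()
        self.slots[slot] = cell
        self.queues.setdefault(vci, deque()).append(slot)
        return True

    def dequeue(self, vci: int) -> bytes | None:
        q = self.queues.get(vci)
        if not q:
            return None
        slot = q.popleft()
        cell, self.slots[slot] = self.slots[slot], None
        self.free.append(slot)                     # slot returns to the pool
        return cell

pool = CellBufferPool(4)
pool.enqueue(32, b"cell-A")
pool.enqueue(33, b"cell-B")
print(pool.dequeue(32), len(pool.free))            # b'cell-A' 3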

3.3 Line Card Processor

The Line Card Processor (LCP) manages the resources of the receive section, transmit section, and communications paths. It is responsible for setting up circuits, releasing circuits, monitoring circuits, allocating bandwidth to circuits, and other network management operations both within the AN2 and with the WAN. The LCP communicates with other switch processors via ATM cells. The LCP can receive cells from the AN2 crossbar, and transmit signals into the AN2 crossbar. The LCP communicates with the WAN via cells which pass through the AN2 crossbar, and hence are subject to the standard resource management logic. The LCP can also communicate with SONET equipment via the path, section, and line overhead bytes. The LCP is composed of a general purpose RISC processor and support chips.

4 LAN/WAN Interface Issues

The hosts on the various networks that comprise the MAGIC gigabit testbed will initially communicate using the TCP/IP suite [4]. The IP datagrams generated by hosts will be carried on the connection-oriented ATM LAN and WAN networks. The gateway and associated hosts support the assignment and mapping of IP traffic to virtual circuits, the provisioning of virtual circuits, and the dynamic management of virtual circuits. This section describes how connectionless IP services will utilize the connection-oriented ATM service provided by the AN2 and B-ISDN WAN.

[Figure 6: Gateway Protocol Stack. The end-to-end path runs from a host on the AN2, through an IP router on the AN2, the AN2/B-ISDN gateway, the ATM WAN, a remote LAN/B-ISDN gateway, and an IP router on the remote LAN, to a host on the remote LAN; TCP and IP are carried over AAL 5 on the AN2, the ATM WAN, and the remote LAN.]

4.1 Internetworking

The hosts on the MAGIC network will use AAL 5, the Simple and Efficient Adaptation Layer (SEAL) [12], for carrying IP datagrams on ATM cell streams. An intermediate IEEE 802.2 LLC layer may also be supported, for interoperability with the IEEE 802 protocols. The protocol stack is shown in Figure 6.

In order to direct IP packets to local AN2 hosts or to remote hosts via the WAN, IP routing functionality will be provided in the network. Initially, IP routing will be accomplished using a selected host (or hosts) on the local subnet, as shown in Figure 7. This corresponds to the current practice for networks connected to the Internet. Future research will explore other options which will alleviate the potential performance bottlenecks imposed by the use of a single router. A number of possibilities exist; for example, multiple routers could each manage a set of virtual circuits.

The initial MAGIC testbed configuration will use permanent virtual circuits (PVCs) across the wide area network. It is envisioned that the provisioning of PVCs will be done using SNMP [3] according to the MIB published by the ATM Forum [6].

[Figure 7: Initial Routing Configuration for MAGIC. Hosts A through D with IP router D on one campus network and hosts E through H with IP router H on another, interconnected across the SONET/ATM WAN by virtual circuits #1 through #5.]

The gateway will support signaling according to this standard. Later MAGIC testbed configurations will include ATM switches, and hence will use switched virtual circuits (SVCs). The gateway LCP will support the required signaling, most likely the proposed Q.93B extensions to Q.931 [11].
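As an illustration of the mapping from connectionless IP traffic to provisioned circuits, the Python sketch below performs a longest-prefix lookup from destination network to a PVC identifier. The prefixes and VPI/VCI numbers are hypothetical; actual provisioning would go through SNMP and the ATM Forum MIB as noted above.

# Hypothetical destination-prefix to PVC lookup table (sketch).
import ipaddress

PVC_TABLE = {
    ipaddress.ip_network("192.0.2.0/24"):    (0, 101),   # remote LAN via one PVC
    ipaddress.ip_network("198.51.100.0/24"): (0, 102),   # remote LAN via another
}

def vc_for_destination(dst: str) -> tuple[int, int] | None:
    """Longest-prefix match of the destination against provisioned PVCs."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in PVC_TABLE if addr in net]
    if not matches:
        return None                      # no PVC: drop or use a default route
    best = max(matches, key=lambda net: net.prefixlen)
    return PVC_TABLE[best]

print(vc_for_destination("192.0.2.17"))   # (0, 101): segment onto that VC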

4.2 Bandwidth Allocation

Many data protocols and services are connectionless; connection-oriented applications are frequently built on top of these connectionless protocols. Connectionless applications tend to generate short bursts of packets followed by idle periods. Even connection-oriented applications, such as file transfer applications, may require high data rates at some times and lower data rates at others. The evolving local ATM networks and wide area networks are connection-oriented, so services must be provided to interface connectionless protocols to connection-based communication networks. This section outlines the issues involved in providing such a service and the implications for the gateway architecture.

In the B-ISDN environment, it will be necessary to match the dynamics of the connectionless traffic to the characteristics of the virtual circuit carrying this traffic in the connection-oriented system. The common solution [5, 8] to this problem is to initially request a modest amount of connection-oriented network bandwidth and dynamically adjust the requested capacity as the interface service detects the need for additional capacity. Renegotiation of call parameters during a session is a facility expected in B-ISDN [7]. We plan to implement a service on the local area network, called the CL/CO service, to monitor the bandwidth requirements of connections through the system and adjust bandwidth allocations among those connections. The connectionless-to-connection-oriented conversion and the dynamic bandwidth allocation process should possess the attributes listed below. The CL/CO service should:

- use infrequent signaling between the CL/CO service and the connection-oriented service,
- not be sensitive to the specific nature of the traffic statistics,
- be able to operate at gigabit per second speeds,
- be insensitive to the relative latencies of gigabit networks,
- induce minimum latency, and
- not require extensive special switch interaction.

The gateway is designed to provide the data necessary to implement the dynamic bandwidth allocation CL/CO service. The CL/CO service will be tested within the MAGIC network by fixing the WAN capacity available to the gateway and executing several remote applications simultaneously. During the tests we will experiment with several dynamic allocation algorithms and signaling protocols.
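One simple policy in this family, starting a circuit with a modest allocation and renegotiating up or down when measured utilization crosses thresholds, is sketched below. The thresholds, step size, and signaling hook are assumptions for illustration, not the algorithms under evaluation in MAGIC.

# Threshold-based dynamic bandwidth allocation policy (sketch).
class CoVirtualCircuit:
    def __init__(self, initial_mbps: float = 10.0, step_mbps: float = 10.0):
        self.allocated = initial_mbps
        self.step = step_mbps

    def renegotiate(self, delta: float):
        # In the gateway this would be a signaling exchange with the WAN.
        self.allocated = max(self.step, self.allocated + delta)

def adjust(vc: CoVirtualCircuit, measured_mbps: float,
           high: float = 0.8, low: float = 0.3):
    """Request more capacity when busy, release capacity when idle."""
    utilization = measured_mbps / vc.allocated
    if utilization > high:
        vc.renegotiate(+vc.step)
    elif utilization < low and vc.allocated > vc.step:
        vc.renegotiate(-vc.step)

vc = CoVirtualCircuit()
for sample in (2.0, 9.5, 18.0, 3.0):      # measured Mb/s over successive windows
    adjust(vc, sample)
    print(vc.allocated)                    # 10.0, 20.0, 30.0, 20.0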


4.3 Performance Statistics Collection

In order to develop a fundamental base of knowledge about the nature of LAN/WAN traffic statistics, and to provide a method to evaluate the effectiveness of the dynamic bandwidth allocation and management algorithms just discussed, the gateway supports statistics gathering functions. Statistics are collected on both a per-packet and a per-cell basis, using the payload type identifier specified in the AAL 5 definition [6]. The statistics targeted for collection are:

- packet interarrival time series
- packet length distribution over time
- packet delay statistics, including evolution of statistics over time
- cell interarrival time series
- cell delay statistics, including evolution of statistics over time
- credit queue statistics (length, idle time, time evolution)
- loss statistics across the WAN:
  - losses due to bit errors
  - losses due to WAN congestion
  - evolution of statistics over time (loss bursts)

The statistics are buffered at the gateway for a short period of time, and then forwarded using a dedicated VCI to a host for bulk collection and analysis.
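A minimal sketch of the per-cell side of this collection, recording interarrival times for a window of cells and then handing the batch off for forwarding on the dedicated VCI, might look like the following; the class, window size, and timestamps are illustrative only.

# Windowed cell interarrival-time collection (sketch).
import time

class CellStats:
    def __init__(self, window: int = 1000):
        self.window = window
        self.last_arrival = None
        self.interarrivals = []

    def on_cell(self, now: float | None = None):
        now = time.monotonic() if now is None else now
        if self.last_arrival is not None:
            self.interarrivals.append(now - self.last_arrival)
        self.last_arrival = now
        if len(self.interarrivals) >= self.window:
            return self.flush()        # batch ready for the collection host
        return None

    def flush(self):
        batch, self.interarrivals = self.interarrivals, []
        return batch

stats = CellStats(window=3)
for t in (0.0, 0.001, 0.003, 0.004):
    batch = stats.on_cell(now=t)
    if batch:
        print(batch)                    # [0.001, 0.002, 0.001]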

5 Conclusions

This paper has described the gateway architecture for the interconnection of a DEC AN2 gigabit local area network and the 2.4 Gb/s MAGIC gigabit wide area network. The gateway is designed to support the transport of ATM LAN traffic over a B-ISDN wide area network at SONET OC-12 rates. The gateway can be configured to support a single SONET OC-12c tributary, or four OC-3c tributaries multiplexed into an OC-12 frame. While the MAGIC testbed will use the TCP/IP suite and AAL 5, the gateway architecture is designed to support a variety of higher level protocols and adaptation layers.

References

[1] T. Anderson, S. Owicki, J. Saxe, and C. Thacker. High speed switch scheduling for local area networks. In Proc. ASPLOS, 1992.

[2] R. Ballart and Y. Ching. SONET: Now it's the standard optical network. IEEE Commun. Mag., 27(3):8-15, Mar 1989.

[3] J. D. Case, M. S. Fedor, M. L. Schoffstall, and J. R. Davin. Simple Network Management Protocol. Internet Working Group Request for Comments 1157, Network Information Center, SRI International, Menlo Park, California, May 1990.

[4] D. E. Comer. Internetworking with TCP/IP, Volume I. Prentice-Hall, Englewood Cliffs, New Jersey, 1991.

[5] P. Crocetti, G. Gallassi, and M. Gerla. Bandwidth advertising for MAN/WAN connectionless internetting. In Proc. IEEE INFOCOM, Bal Harbor, Florida, Apr 1991.

[6] ATM Forum. Network Compatible ATM for Local Network Applications. Apple Computer, Bellcore, Sun Microsystems, Xerox, Apr 1992.

[7] 1990 CCITT Study Group XVIII Recommendation I.150. B-ISDN Asynchronous Transfer Mode Functional Characteristics. CCITT, Geneva, 1990.

[8] L. Mongivoni, M. Farrell, and V. Trecorido. A proposal for the interconnection of FDDI networks through B-ISDN. In Proc. IEEE INFOCOM, Bal Harbor, Florida, Apr 1991.

[9] M. T. Mullen and V. S. Frost. Dynamic bandwidth allocation for B-ISDN based end-to-end delay estimates. In Proc. IEEE ICC, Chicago, Jun 1992.

[10] G. M. Parulkar and J. Turner. Towards a framework for high speed connection in heterogeneous networking environments. In Proc. IEEE INFOCOM, Ottawa, Canada, Apr 1989.

[11] 1989 CCITT Study Group XI Recommendation Q.931. Specifications of Signaling System No. 7. CCITT, Geneva, 1989.

[12] ANSI Committee T1 Contribution T1S1.5/91-449. AAL 5 - A New High Speed Data Transfer AAL. Bellcore Technical Reference Issue 2, IBM et al., Dallas, Texas, Nov 1991.

[13] Bellcore Technical Reference TR-NWT-000253. Synchronous Optical Network (SONET) Transport Systems: Common Generic Criteria. Issue 2, Bellcore, Dec 1991.
