A Networked Frame Buffer with Window Management Support

Michaela Blott, Hans Eberle
Institute for Computer Systems
Swiss Federal Institute of Technology (ETH)
CH-8092 Zurich, Switzerland
E-mail: [email protected], [email protected]

KEYWORDS

Distributed Multimedia, Frame Buffer, Quality of Service, Teleconferencing.

ABSTRACT

When processing multimedia data streams in a distributed system it is often advantageous if data, in particular continuous data such as video and audio, can be sent directly from the source to the sink device. An example is a video conferencing application with source devices (cameras, microphones) transmitting directly to sink devices (frame buffers, speakers) without involving an intermediate processor. This approach saves communication bandwidth and processor cycles. Further, it reduces latency and latency jitter. Unfortunately, systems in use today do not support this kind of direct communication. Even worse, current systems do not provide real-time or quality of service (QoS) guarantees, making it impossible to process or transfer continuous data within specified latency times. This paper presents the design and implementation of a networked frame buffer with window management support. The frame buffer is implemented as an independent node of a proprietary desk area network (DAN) which offers scalability and QoS guarantees. As an independent node, the frame buffer is connected directly to the network and can be accessed by any other node. Further, the core functions of window management are implemented directly on the frame buffer. This avoids processor intervention during the transmission and display of video data. Thus, the frame buffer is an ideal output device for distributed multimedia applications such as desktop video conferencing.

1. INTRODUCTION

Computer systems have become powerful enough to process multimedia data streams, including continuous data streams such as video or animated graphics. Their communication architecture, however, could be improved for this task. In a truly distributed multimedia system, sink and source devices communicate with each other directly, without the involvement of a processor.
In many cases, data streams can be reproduced in the same form they were produced without having the main processor perform any operations on them. For example, in a video conferencing application video streams can be sent directly from camera devices to frame buffers. This application also shows that multimedia data streams are often generated in a distributed way, that is, by different sources and not only by the processor on which the window manager runs. Direct communication between source and sink devices can save bandwidth and reduce the load on the processor. Further, by avoiding the processing of streams by intermediate nodes, latency and latency jitter are reduced. When running multimedia applications on current systems, the lack of QoS guarantees poses a serious problem. When processing continuous data, certain real-time constraints have to be met. Typically, operations on such data have to be executed periodically and in bounded time. Therefore, the software and hardware involved need to provide QoS guarantees. In this respect, current general-purpose system architectures, including operating systems, provide little support since data is operated on with best effort and, thus, the time taken to produce, transmit and reproduce data cannot be predicted. It is mainly the general-purpose operating systems in use today that make it difficult to predict latency times. The lack of QoS guarantees and, with it, unpredictable transmission and processing times lead to visual distortions, as exhibited by current implementations of desktop video conferencing. These are mainly software solutions, sometimes aided by additional plug-in boards, that run on workstations under standard operating systems [Beadle 1995, Taylor and Tolly 1995]. While such implementations offer low cost and easy availability, limited processing and communication resources cause further shortcomings such as low-resolution pictures and support for only a small number of video windows.
Continuous data can only be processed in a satisfactory way by a seamless QoS architecture that covers all system components, hardware as well as software, from end to end along the way from the source to the sink. More recent developments in local area networks such as ATM technology recognize this need by offering QoS guarantees in the form of service or traffic classes that typically differ in guarantees regarding bandwidth and latency bounds. Unfortunately, these service guarantees do not extend into the workstations connected to the network and are lost as soon as the data enters the workstations. Our research group is developing a distributed computing system with the goal of providing a seamless QoS architecture for running distributed multimedia applications. The distributed computing system, known as Switcherland [Eberle 1996, Eberle and Oertli 1996], uses an interconnection structure based on crossbar switches which offers the following advantages. Firstly, the switches are cascadable and, therefore, offer scalability in bandwidth. Secondly, since any node is directly accessible by any other node, data can be sent from a source node to a sink node without getting a processor involved. Thirdly, real-time or QoS guarantees are offered in the form of bandwidth guarantees and bounded end-to-end delays. To avoid processor intervention when transmitting and displaying video data, the processor's tasks have to be taken over by other components. The tasks can be divided into two main groups: data flow coordination and window management. The former refers to all operating system routines typically needed to transmit video data from a network interface to a frame buffer in a standard workstation environment. These routines are superfluous in Switcherland, in that all components, in particular the frame buffer, are implemented as independent nodes and are directly accessible by any other node in the network. In the proposed architecture, window management is partly performed in software and partly in hardware directly at the frame buffer. To keep the hardware simple, the frame buffer implements the minimum functionality only, that is, windowing and clipping.
The main focus of this paper is the design and implementation of a frame buffer for the Switcherland distributed computing system. Chapter 2 gives a brief overview of the Switcherland architecture and shows how the frame buffer fits into it. Chapter 3 explains the organization of the frame buffer. Chapter 4 discusses the functions of the window manager and Chapter 5 shows how they are integrated into the frame buffer. Chapter 6 contrasts our design with related projects. Finally, Chapter 7 contains the conclusions and Chapter 8 outlines future work.

2. THE DISTRIBUTED COMPUTING SYSTEM SWITCHERLAND

A possible configuration of the Switcherland system is shown in Figure 1. It consists of I/O nodes (IO) and processor/memory nodes (PM). Examples of I/O nodes are secondary storage devices, frame buffers or video digitizers. Nodes are connected by crossbar switches (S). The gray boxes stand for workstations, though their boundaries are less rigid than those of traditional systems. That is, the logical grouping of nodes can easily look different from the physical grouping.

Figure 1: Sample configuration of the Switcherland distributed computing system.

Switcherland has many characteristics which make it an ideal platform for processing multimedia data. Most importantly, it offers real-time support. In particular, nodes can be accessed at guaranteed rates and with bounded delays. Further, many applications such as video conferencing use multicast connections to transmit data from a source device to multiple sink devices. To efficiently support these applications, Switcherland implements multicast in hardware. More precisely, the switches can forward data received at one input port to multiple output ports. This saves bandwidth on significant sections of the transmission path. Switcherland uses a communication model based on a global memory. That is, all communication is implemented uniformly through load and store operations. All nodes reside in a single address space and any node can directly access any other node. Thus, it is possible to transfer data such as a video stream directly from a video digitizer to a frame buffer. Load and store operations are transferred as cells, that is, as small fixed-size packets. Each cell contains sixteen 32-bit words. The first word is a header, which contains routing information and specifies the type of operation as well as the number of valid words in the payload. The second word gives an address which is used as an offset within the node the cell is directed to, and the remaining words contain the payload. Figure 2 shows a typical cell received by the frame buffer. The header defines the operation to be a store and the payload to contain fourteen valid words. The second word specifies the address a0 of the first data value d0. The remaining data values map to adjacent

addresses, that is, for value dj the address is a0 + j. Most commonly, the store operation will be directed to the pixel map, in which case the data values are in fact pixel values and the address corresponds to the coordinates of a point of the pixel map. The mapping of address a onto the coordinates x and y of a pixel displayed on the screen is given by the equations x = a MOD w and y = a DIV w, where w stands for the width of the pixel map. Consequently, the given example will draw a line with a length of fourteen pixels.
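As a concrete illustration, the address-to-coordinate mapping above can be sketched in a few lines. This is a minimal model, not the hardware itself; the width of 1024 pixels is taken from Section 3, and the function names are ours:

```python
W = 1024  # width of the pixel map in pixels (Section 3)

def address_to_xy(a, w=W):
    # x = a MOD w, y = a DIV w (the equations given in the text)
    return a % w, a // w

def xy_to_address(x, y, w=W):
    # Inverse mapping: a = y * w + x
    return y * w + x

# A store cell whose payload holds 14 data words starting at address a0
# writes 14 consecutive addresses, i.e. a horizontal run of 14 pixels:
a0 = xy_to_address(100, 50)
pixels = [address_to_xy(a0 + j) for j in range(14)]
```

Because consecutive addresses wrap at the pixel-map width, a run that crosses the right edge would continue on the next scan line rather than clip, which is consistent with the purely linear addressing described here.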

Figure 2: A store operation as received by the frame buffer.

3. ORGANIZATION OF THE NETWORKED FRAME BUFFER

As shown in Figure 3, the networked frame buffer consists of three components: a pixel map, a network interface and a display interface. The main component is the pixel map, which stores the content of the display. In other words, the display is regarded as a matrix of pixels mapped one to one into an array of memory cells. The current implementation provides a resolution of 1024 by 768 pixels, each represented by a 24-bit value according to the RGB color model. Physically, the pixel map is implemented with an array of twelve 256 kByte VRAMs. To achieve high access bandwidth, the VRAMs provide two ports which can be accessed independently: a random access port connected to the network interface and a serial access port used by the display interface. The network interface is described in Section 3.1 and the display interface is explained in Section 3.2.


Figure 3: Organization of the networked frame buffer.

3.1 Network Interface

The data path of the network interface is organized as a pipeline which is 32 bits wide and contains four stages. To match the transfer rate of the network link, which is 26.625 MByte/s, the pipeline is clocked at a frequency of 6.66 MHz, which corresponds to a period of 150.24 ns. The pipeline's data path and control logic are implemented with hardwired logic. No general-purpose processor is used since the operations, including the windowing operations described in Section 4, are relatively simple. Further, it seemed difficult to keep up with the data rate on the network link with a processor-based implementation. To explain how cells are processed, we use the following terminology. We say that a load or store operation translates into a request and an acknowledgment. The resulting two types of cells are identified as request and acknowledgment, respectively. Requests are sent by clients and received by servers, while acknowledgments are sent by servers and received by clients. Thus, the frame buffer only has to act as a server. It receives a request, executes the requested operation and possibly returns an acknowledgment. While a read request always returns an acknowledgment containing the read data, a write request only does so if an acknowledgment is requested for error checking purposes. To be more specific, a cell is processed as follows. First, the incoming serial bit stream is checked for errors and parallelized into words. Then, according to the kind of operation specified in the header of the cell, up to fourteen words have to be read or written. Read and write operations require an address counter, since multiple sequential words can be accessed and the operation itself specifies the start address only. Finally, the network interface has to assemble and return a cell if requested to do so. The acknowledgment is derived from the request in that its header is reused and modified if necessary, and, in the case of a read operation, the payload is filled with the read values. Figure 4 sketches out the pipelined architecture of the network interface: white components represent combinatorial logic, and dark gray ones stand for registers. The figure shows the network receiver at the top and the network transmitter at the bottom.
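The request/acknowledgment behaviour of the frame buffer as a server can be modelled in software as follows. The dictionary-based cell representation and the field names are our assumptions for illustration; the real hardware decodes fixed bit fields of the 32-bit header, whose exact layout the paper does not specify:

```python
def handle_request(cell, memory):
    """Process one request cell against a memory; return an ack cell or None.

    cell fields (our naming): op ("load"/"store"), count (valid payload
    words), ack (whether a store should be acknowledged), address, payload.
    """
    op, count, addr = cell["op"], cell["count"], cell["address"]
    if op == "load":
        # A read request always returns an acknowledgment with the read data.
        data = [memory.get(addr + j, 0) for j in range(count)]
        return {"op": "ack", "address": addr, "payload": data}
    elif op == "store":
        # Sequential words land at consecutive addresses (address counter).
        for j, value in enumerate(cell["payload"][:count]):
            memory[addr + j] = value
        # A write is only acknowledged if the client asked for it.
        return {"op": "ack", "address": addr, "payload": []} if cell["ack"] else None

memory = {}
handle_request({"op": "store", "count": 3, "ack": False,
                "address": 10, "payload": [7, 8, 9]}, memory)
reply = handle_request({"op": "load", "count": 2, "ack": True,
                        "address": 11, "payload": []}, memory)
```

The hardware performs the same steps with a hardwired pipeline rather than a loop: the header supplies `op` and `count`, a loadable counter generates the consecutive addresses, and the acknowledgment reuses the request header.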
Between them, the pipelined logic with the address and data registers, and the pixel map are found. After the network receiver, the pipeline forks into three paths called header pipeline, address pipeline and pixel pipeline, according to the three different types of words contained in a cell. Three pipeline stages can be identified. Registers L0 and C0 of the header pipeline store the length of the payload and the type of operation, respectively, both extracted from the header of a cell. The length of the payload is decremented for each clock cycle until L0 equals zero. C0 is only modified when a new header is received. The logic following these registers determines whether a read or write operation has to be performed, and whether the actual pixel value is valid, which is the case as long as L0 is not zero. The corresponding flags are stored in register RW1. RW2 is necessary to provide the flags a clock cycle later when the pixel value is stored in P0. With the address being in register A1, the required read or write operation can then be executed on the pixel map.

Figure 4: The pipelined architecture of the network interface.

The address pipeline calculates the addresses of the data words in the payload. As indicated in the figure, the pipeline consists basically of a loadable counter. The additional register A0 is only necessary to adjust the pipeline length. The counter starts with the base address extracted from the second word of a cell and stored in register A0. For every processed pixel this address is incremented. The pixel pipeline consists of the two registers P0 and P1. P0 is always loaded with the word coming out of the network receiver. When RW2 specifies a write operation, the tristate buffers following P0 are enabled and the pixel value is available on the pixel bus. P1 is always loaded with the value available on the pixel bus. In case of read operations, this is the value stored in the pixel map at address A1. Otherwise, it is the value contained in P0. To keep up with the pipeline, the random access side of the pixel map must provide a transfer rate of one word per clock cycle. Since the timing for accessing the VRAM in one clock period is too critical, we organized the pixel map as a four-way interleaved memory.
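The point of the four-way interleaving can be illustrated as follows: consecutive word addresses rotate over the banks, so the memory as a whole accepts one word per pipeline clock even though each individual bank is only accessed every fourth cycle. The bank-selection rule below is the standard low-order interleaving scheme; the paper does not spell out the exact address bits used:

```python
NBANKS = 4  # four-way interleaved pixel map

def bank_of(address):
    # Low-order interleaving: consecutive addresses hit consecutive banks.
    return address % NBANKS

# A sequential burst of eight addresses touches each bank once per four
# cycles, giving every bank four clock periods to complete its access.
burst = [bank_of(a) for a in range(8)]
```

Under this scheme a bank needs to sustain only a quarter of the pipeline's word rate, which relaxes the VRAM access timing the text describes as too critical for a single bank.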

3.2 Display Interface

The display interface is responsible for interfacing the raster refresh buffer to a CRT display. To avoid flickering, CRT displays should ideally be refreshed at around 100 frames per second, whereby refreshing a frame requires accessing all pixel values of the pixel map. The display interface accesses the pixel values through the serial access ports of the VRAMs. These ports are connected to internal shift registers that hold one complete row of the VRAMs' array of memory cells. Since the transfer of a row into the shift register only takes one access cycle, refreshing the display uses a small percentage of the available memory bandwidth. The shift registers are 8 bits wide. Therefore, we used three VRAMs in parallel to obtain 24-bit-wide pixel values. Figure 5 illustrates the organization of the display interface. To achieve the high bandwidth necessary to refresh the display at 100 frames per second, a four pixel wide data path connects the serial ports of the VRAMs with the RAMDAC. Hence, the pixel map consists of four independent banks which, when accessed in parallel, provide four adjacent pixels. Consequently, bank b contains all pixels whose column address satisfies b = column address MOD 4. The figure also indicates that three VRAMs are used per bank and that each of them stores one of the three RGB values. Pixels are transferred to the RAMDAC where they are converted into analog signals at a rate of 106.25 MPixel per second. This corresponds to a monitor refresh rate of 99.4 frames per second.

Figure 5: The display interface with the VRAM array and the RAMDAC.

4. WINDOW MANAGEMENT

A window management system offers the abstraction of windows in addition to a number of utilities such as polygon filling and line drawing. In most conventional systems, window management functions are implemented in software and executed by the main processor of the system or by a display coprocessor. However, in a DAN environment a partial implementation in hardware on the frame buffer is a more attractive alternative since it avoids processor intervention during transmission and display of video data and, with it, reduces delay jitter, provides shorter end-to-end delays and saves network bandwidth. When functions are partitioned between the frame buffer and the processor, a number of issues have to be taken into consideration. On the one hand, hardware is generally required to be as simple and inexpensive as possible. On the other hand, sufficient functionality must be made available in hardware to avoid processor intervention during the time when video data is actually transferred. The key functions to be implemented in hardware are windowing and clipping. The remaining window management functions can be implemented in software and executed by a processor node. Windowing offers the abstraction of windows and basically requires an address translation. Instead of addressing pixels absolutely within the pixel map, windowing allows pixels to be addressed relatively within a window. In this model, the location of a pixel is specified by a window identifier (wid) and an address relative to the origin of the window. In our case the origin is the address of the upper left corner of the window, in the following referred to as offset. It is obtained by indexing a translation table with the window identifier. The absolute address is then given as the vector sum of the offset and the relative address. Figure 6 illustrates the addressing scheme.

Figure 6: Calculating the address of a pixel.

Windowing adds a third dimension in that windows can overlap. When projecting windows onto the two-dimensional screen, it needs to be determined which parts are visible. Hidden parts of a window need to be clipped. We implemented this functionality with the help of a clipping mask, which stores for every address of the pixel map the identifier of the visible window. Executing a store operation now works as follows. First, the absolute address is calculated by the address translation mechanism described. Then, the absolute address indexes the clipping mask to obtain the identifier of the window visible at the corresponding location. Finally, this identifier is compared with the one given as part of the address of the store operation. If the identifiers are equal, the pixel is visible and the store operation may be executed. Otherwise, the operation is ignored. A standard window manager offers considerably more functions than windowing and clipping. For example, most applications require windows to have frames, menus, clickable items and an interface based on pointers. Moreover, an extensive library of display primitives including polygon-, raster- and line-drawing utilities is usually provided. However, this functionality is typically not invoked while video data is displayed. Therefore, its realization can be delegated to a processor node. For example, a window's frame needs to be drawn only once when it is originally created, while its content is frequently updated, as is the case when video data is displayed.
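The combined effect of windowing and clipping on a store operation can be summarized in a short sketch. The table and variable names are ours; the hardware performs exactly these two lookups with SRAM tables, as described in Section 5:

```python
def store_pixel(pixel_map, translation_table, clipping_mask, wid, rel_addr, value):
    # Windowing: look up the window's origin and form the absolute address.
    offset = translation_table[wid]      # offset = f(wid), cf. Figure 6
    absolute = offset + rel_addr
    # Clipping: write only if this window is the one visible at that location.
    if clipping_mask[absolute] == wid:
        pixel_map[absolute] = value

# Toy 16-pixel map: window 1 owns addresses 0-4, window 2 hides 5-9.
translation_table = {1: 0, 2: 5}
clipping_mask = [1] * 5 + [2] * 5 + [0] * 6
pixel_map = [0] * 16

store_pixel(pixel_map, translation_table, clipping_mask, 1, 3, 0xFF)  # visible: written
store_pixel(pixel_map, translation_table, clipping_mask, 1, 6, 0xAA)  # hidden: ignored
```

Note that a source device never needs to know where its window lies or what covers it; it always writes window-relative addresses, and the frame buffer silently discards stores to hidden pixels.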

5. IMPLEMENTATION OF THE CLIPPING AND WINDOWING FUNCTIONS

In this chapter, we describe how clipping and windowing are implemented. Windowing basically consists of an address translation performed in two steps. In the first step, the translation table is indexed with the window identifier to determine the offset of the window. In the second step, the offset is added to the relative address, resulting in the absolute address. As shown in Figure 7, windowing is implemented with the help of a translation table realized with SRAMs, an adder and two registers storing the original address and the calculated absolute address.

Figure 7: Implementation of the windowing function.

Clipping is also performed in two steps. First, the clipping mask is accessed with the absolute address obtained by the windowing function to retrieve the identifier of the window visible at the corresponding location. Then, this window identifier is compared with the one given by the original address. The resulting flag indicates whether the pixel is visible or not. In case the pixel is invisible, the pixel value stored in the pixel map at the given address belongs to a different window and must not be modified. The implementation, as sketched out in Figure 8, uses SRAMs, a comparator and three registers necessary for storing the window identifier, the absolute address and the resulting flag.

Figure 8: Implementation of the clipping function.

Both functions belong to the address pipeline of the network interface. The necessary modifications to the original pipeline are shown in Figure 9. The logic needed for windowing is inserted between registers A0 and A1, while the logic needed for clipping is found in the next pipeline stage. Integrating window management into the address pipeline delays the availability of VRAM addresses by one stage. Therefore, the lengths of the pixel and header pipelines had to be adjusted accordingly.
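While the frame buffer only reads the clipping mask, the mask itself has to be maintained by the window manager running on a processor node whenever windows are created, moved or restacked. A minimal sketch of how such a rebuild could work, painting windows back to front so the frontmost window wins each overlapping pixel (the representation and names are our illustration, not taken from the paper):

```python
def rebuild_clipping_mask(width, height, windows):
    """Rebuild the clipping mask for a pixel map of width x height.

    windows: list of (wid, x, y, w, h) rectangles in back-to-front order.
    Returns a flat list where entry y*width + x holds the visible wid
    (or None where no window covers the pixel).
    """
    mask = [None] * (width * height)
    for wid, x, y, w, h in windows:
        for row in range(max(y, 0), min(y + h, height)):
            base = row * width
            for col in range(max(x, 0), min(x + w, width)):
                mask[base + col] = wid  # later (frontmost) windows overwrite
    return mask

# Window 2 partially covers window 1 on a tiny 8x4 pixel map.
mask = rebuild_clipping_mask(8, 4, [(1, 0, 0, 4, 4), (2, 2, 1, 4, 2)])
```

Since this rebuild happens only on window-management events and not per video frame, it fits the paper's partitioning: slow-path work on the processor node, per-pixel lookups in frame-buffer hardware.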

6. RELATED WORK

To overcome bandwidth limitations and the lack of QoS guarantees, the Desk Area Network project at the University of Cambridge [Hayter and McAuley 1991], as well as the Medusa project at the Olivetti Research Laboratory [Glauert et al. 1994] and the VuNet project at MIT [Houh et al. 1995] use ATM switches to connect the components of a workstation. Such an approach is surely attractive since with the increasing popularity of ATM technology cheap switches are becoming widely available. However, in contrast to DANs, ATM networks are mainly designed to cover wide areas and to interconnect a large number of heterogeneous systems. As a result, mechanisms need to be in place to manage relatively high rates of transmission errors and to control nodes which are not trustworthy. Therefore, protocols for ATM networks are comparatively complicated. In contrast, when designing a DAN it is legitimate to assume that nodes can be trusted and behave properly. Further, we focus specifically on a DAN with a limited diameter. Consequently, our protocols, in particular flow control and error handling, can be simpler and more specifically tailored for applications that are run in a desk area environment. ATM was originally developed for the transmission of voice and audio data. For this kind of data transmission, end-to-end application latencies up to 100 ms are acceptable. A DAN, however, also transports other types of data which in many cases require low latencies in the micro- or, even better, submicrosecond range. Most of today's ATM switches and interfaces have high latency, with a resulting end-to-end application latency similar to the one observed over more traditional networks like Ethernet [Babic et al. 1997]. Congestion in ATM networks is typically resolved by dropping cells. For applications like telephony or TV broadcasting this strategy is usually acceptable, as long as the loss of cells only leads to minor distortions.
In a workstation cluster, however, the transmission of most data types has to be free of loss or errors. Dropping cells would therefore cause retransmission of the lost data and, with it, increase latency and waste bandwidth. Although the overall goals of the mentioned projects and the resulting network environments are similar to ours, the frame buffers and window management functions have been implemented in ways rather different from the one we have described here. In the VuNet project, there is no directly networked frame buffer. Rather, an Alpha workstation connected to the network is responsible for displaying video data and images. More similar to our architecture, the display system of the Medusa project is implemented as a networked node, termed a direct peripheral.

Figure 9: The network interface extended with the windowing and clipping functions.

These direct peripherals incorporate an ARM processor responsible for interfacing the network and window management. In a similar way, the frame buffer designed for the Desk Area Network uses an ARM processor. Thus, the frame buffers of these systems are considerably more complex than ours. The Desk Area Network project also incorporates special handling of clipping. To save bandwidth on the transmission paths, clipping is performed by DAN clipping nodes close to the source of the video data stream. This can be contrasted with the approach described here, where clipping is performed by the frame buffer at the destination of the video data. Although the former approach is advantageous for point-to-point connections in terms of saving bandwidth, when used by applications such as video conferencing it undermines the benefits of hardware multicast: in a video conference, each participant should be allowed to arrange video windows individually and, thus, have the windows positioned and clipped in a different way. If clipping is performed near the source, the network might have to deal with several variations of the same video stream, limiting the benefits of multicast, and possibly requiring a multiple of the bandwidth of the original stream.

7. CONCLUSION AND STATUS

To implement a distributed window system that allows multiple source devices to send data directly to a frame buffer, a minimum amount of the functionality typically performed by the window manager, that is, clipping and windowing, has to be provided either by the source devices or the frame buffer. Without this extra support, these functions require a processor node to perform them. The detour via a processor node is an unattractive alternative since it increases end-to-end latency and latency jitter; further, it uses additional network bandwidth and costs processor time. We have chosen to implement the windowing and clipping functions on the frame buffer rather than in the source nodes. While our approach can waste bandwidth when data is sent to refresh invisible parts of a window, it is superior when distributing data streams by multicast, a feature which can be used efficiently by applications such as videoconferencing. In such applications a video stream is multicast to several frame buffers and typically displayed in windows at different locations and with different sizes. If clipping and windowing are performed before transmission, that is, in the video source, only the needs of one frame buffer can be met.

The presented design has been implemented and a dozen fully functional systems are available and in use. The implementation uses off-the-shelf components, including field-programmable gate arrays (FPGAs). The FPGAs implement most of the control logic and the data path of the frame buffer. The operating system Oberon System 3 [Gutknecht 1994] has been ported to Switcherland and its window manager has been redesigned to make use of the window management functions implemented on the frame buffer. The system provides an attractive user interface including video windows fed directly by remote source devices.

8. FUTURE WORK

In this paper we focused on desktop video conferencing as a possible application of the platform described. However, a scalable network offering QoS guarantees and frame buffers realizing the basic window management functions are also attractive for various other applications. One of them is video walls. Video walls are high-resolution displays implemented by a matrix of component screens with lower resolution. This approach is currently the only way to implement large displays with resolutions of several million pixels since current display technologies are limited in scalability by manufacturing restrictions. Synchronization problems are inherent in this application. All parts of a video window must be updated simultaneously, even if they are displayed on different component screens. In an environment that provides guaranteed bandwidths and bounded, short delays from end to end, resynchronization at the destination of the data is not necessary. Since the Switcherland project provides these guarantees at all levels of the system's hardware and software, resynchronization is not even needed after a video stream has been manipulated by a processor node. Therefore, our approach can simplify a suitable display architecture significantly.
We want to take advantage of these features and extend the existing frame buffer design to a scalable version used for interfacing video walls. In contrast to existing video walls tailored for specific applications, we want to provide a tool kit consisting of atomic building blocks which can be plugged together to form a display of any resolution, limited only by the size of the address space.
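Routing a pixel of the large logical display to the right component frame buffer reduces to simple integer arithmetic. A sketch, assuming component screens of 1024 by 768 pixels as in the implementation described; the tiling scheme itself is our illustration, not a design taken from the paper:

```python
TILE_W, TILE_H = 1024, 768  # resolution of one component frame buffer

def route_pixel(x, y):
    """Map global video-wall coordinates to (tile column, tile row,
    local x, local y) within the addressed component frame buffer."""
    return x // TILE_W, y // TILE_H, x % TILE_W, y % TILE_H

# Pixel (2000, 800) of the wall lands on the tile in column 1, row 1.
tile = route_pixel(2000, 800)
```

In a single global address space as provided by Switcherland, such a mapping could be folded into the address of the store operation itself, so that source devices address the wall as one large pixel map while each component frame buffer serves its own slice.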

REFERENCES

Babic, G.; Durresi, A.; Jain, R.; Dolske, J.; Shahpurwala, S. 1997. "ATM Switch Performance Testing Experiences." Technical Report 97-0178R1, ATM Forum, http://www.cis.ohio-state.edu/~jain/atmf/a-0178r1.htm. (Apr.).

Beadle, P. 1995. "Experiments in Multipoint Multimedia Telecommunication." IEEE Multimedia, vol. 2, no. 2: 30-40.

Eberle, H. 1996. "Switcherland - A Scalable Interconnection Structure for Distributed Computing." In Proceedings of the 3rd Int. Conf. of the Austrian Center for Parallel Computation (Klagenfurt, Austria, Sep. 22-25). In Parallel Computation, Lecture Notes in Computer Science, vol. 1127, L. Böszörményi, ed. Springer, 36-49.

Eberle, H. and Oertli, E. 1996. "Flow Control in the Switcherland Interconnection Structure." In Proceedings of the 11th Int. Conference on Systems Engineering (Las Vegas, Jul. 9-11), 146-151.

Glauert, T.; Hopper, A. and Wray, S. 1994. "Networked Multimedia: The Medusa Environment." IEEE Multimedia, vol. 1, no. 4: 54-63.

Gutknecht, J. 1994. "Oberon System 3: Vision of a Future Software Technology." Software - Concepts and Tools, Springer, (Feb.).

Hayter, M. and McAuley, D. 1991. "The Desk Area Network." ACM Operating Systems Review, vol. 25, no. 4: 14-21.

Houh, H.; Adam, J.; Ismert, M.; Lindblad, C. and Tennenhouse, D. 1995. "The VuNet Desk Area Network: Architecture, Implementation, and Experience." IEEE Journal of Selected Areas in Communications, vol. 13, no. 4: 710-721.

Pratt, I. 1993. "The DAN Framestore." Technical report, ATM Document Selection 2, chapter 29, Systems Research Group Technical Note, Cambridge University Computer Laboratory, (Feb.).

Taylor, K. and Tolly, K. 1995. "Desktop Videoconferencing." Data Communications, vol. 24, no. 5: 64-80.
