PARALLEL SIMULATION OF ATM NETWORKS: CASE STUDY AND LESSONS LEARNED

Carey Williamson
Department of Computer Science, University of Saskatchewan
Email: [email protected]

Brian Unger, Xiao Zhonge
Department of Computer Science, University of Calgary
Email: [email protected]
Abstract This paper summarizes our experiences in developing and using a cell-level ATM network simulator called ATM-TN. The ATM-TN simulator was developed as part of TeleSim, a collaborative research project aimed at developing high performance parallel simulation tools for the design and analysis of broadband ATM networks. The ATM-TN simulator provides the fundamental platform for ongoing research in two areas: parallel simulation performance (e.g., optimistic synchronization, partitioning, dynamic load balancing) and ATM network performance (e.g., traffic modeling, ATM switch design, ABR traffic control). Our experiences with the simulator to date have been largely positive. On the parallel simulation front, we have found that ATM network simulation is a promising application domain for parallel simulation techniques, though there are significant technical challenges to overcome regarding event granularity, simulation partitioning, scheduling, and load balancing. On the network performance front, we have found that detailed cell-level traffic source models, validated against empirical measurements, are invaluable inputs to ATM simulation experiments, such as those studying the statistical multiplexing behaviour of traffic in high speed networks.
Keywords: Broadband networks, parallel simulation, ATM, telecommunication networks
1. INTRODUCTION Simulation is a vital tool in the design and analysis of high speed Asynchronous Transfer Mode (ATM) networks. Simulations in this context typically have two key requirements: the need for detailed cell-level simulation, and the need for a large number of simulation events (e.g., millions to billions of ATM cells) in order to assess quality of service at an appropriate level (e.g., cell loss ratios in the range from 10^{-6} to 10^{-9}; observing even one hundred cell losses at a loss ratio of 10^{-6}, for example, requires simulating on the order of 10^8 cells). These two characteristics combine to challenge current uniprocessor computing platforms, typically producing next-day rather than same-day turnaround time for most commercial ATM network simulators on medium to large ATM network scenarios. Parallel simulation is one approach for addressing this problem. A parallel simulator, if properly designed, can exploit the inherent parallelism in a large ATM network scenario, offering significant speedup. In fact, the inherent parallelism grows with the size of the network scenario and the number of traffic flows. Of course, proper design and partitioning of the parallel simulation is crucial to obtaining this speedup. Parallel simulation of ATM networks is thus the main focus of the TeleSim project [19]. TeleSim is a collaborative research project involving three universities and six industrial sponsors, with the collective goal of developing high performance parallel simulation tools for the design and analysis of high speed ATM networks. The main simulation tool produced by the project is the ATM-TN
(Asynchronous Transfer Mode Traffic and Network) simulator [17]. This simulator is used to pursue research in two distinct areas: parallel simulation, and ATM network performance. In this paper, we comment on our experiences in developing and using both sequential and parallel versions of the ATM-TN simulator. In particular, we highlight the lessons learned (to date) from this project. For ease of presentation, we separate these lessons into project management lessons, general simulation lessons, parallel simulation lessons, and ATM network performance lessons. The remainder of this paper is organized as follows. Section 2 provides some background information on the ATM-TN TeleSim project, setting the context for the paper. Section 3 highlights the main lessons learned from our project, categorized as indicated above. Finally, Section 4 presents a summary of our observations, and identifies the challenges that lie ahead.
2. THE TELESIM PROJECT TeleSim is a collaborative research project involving researchers from three universities: the University of Calgary, the University of Saskatchewan, and Waikato University in New Zealand. The project, which began in 1994, is headquartered at the University of Calgary, where it is led by Brian Unger in the Department of Computer Science. The goal of the TeleSim project is to develop high performance parallel simulation tools for the design and analysis of telecommunications networks. The initial year of the project was funded by the CANARIE (Canadian Network for the Advancement of Research, Industry, and Education) program, and seven industrial sponsors. The ongoing project is funded by an NSERC Collaborative Research and Development (CRD) grant, and six industrial sponsors. Of particular interest to the TeleSim project (and its sponsors) are high speed ATM networks. The TeleSim project brings together researchers from the parallel simulation research community (Calgary, Waikato) and researchers from the computer networking research community (Saskatchewan). Both groups share a common interest in building efficient, accurate, high performance simulation tools for ATM networks. The network simulator constructed by the TeleSim project is called ATM-TN, which stands for Asynchronous Transfer Mode Traffic and Network model [17]. The first release of the ATM-TN software was made available (internally) to the TeleSim project participants in January 1995. This release supported only sequential execution. Subsequent releases of the ATM-TN have supported both sequential and parallel execution, on a variety of hardware platforms (e.g., SGI Power Challenge, SPARC 1000, KSR). Responsibilities in the TeleSim project were divided amongst the project participants according to the expertise available at each site. The University of Saskatchewan research group constructed the traffic models for the TeleSim project. These traffic source models include a model for variable bit rate (VBR) MPEG/JPEG compressed video [1], a model for World Wide Web client behaviour [2], a model for self-similar Ethernet LAN traffic [3], and a detailed model of the TCP/IP protocol suite [12]. The University of Alberta (an earlier participant in the TeleSim project) constructed models of ATM switches, networks, and signalling protocols. The University of Calgary and Waikato University have defined and implemented the simulation interface for ATM-TN, as well as a simulation executive to support efficient sequential and parallel simulation. The simulation interface for ATM-TN is called SimKit. SimKit is an object-oriented C++ class library that supports discrete-event simulation [11]. SimKit defines a class (sk_lp) to represent logical processes (LPs), a class (sk_event) to represent the messages exchanged between LPs, and a framework to transparently support both sequential and parallel execution. Parallel execution is supported by a number of kernels, which include WarpKit, an optimistically synchronized kernel that is a descendant of SMTW [9], and WaiKit, a conservatively synchronized kernel developed at Waikato [5]. The kernel layer deals with the assignment of logical processes to processors, event scheduling, state saving, and other technical issues related to parallel execution. The target hardware platform for ATM-TN is an 18-processor SGI Power Challenge shared-memory multiprocessor at the University of Calgary.
However, sequential execution is supported on PCs and standard Unix-based workstations, such as DEC, HP, SGI, and SPARC.
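To make the logical-process programming model concrete, the sketch below shows the general style that SimKit supports: model components are logical processes that communicate only through timestamped events, and the kernel (sequential or parallel) delivers those events to the LPs' handlers. The class and function names here (CellEvent, LogicalProcess, SinkLP, and the toy run_sequential loop) are illustrative stand-ins, not the actual SimKit API built around sk_lp and sk_event [11].

    // Illustrative sketch of the logical-process style used in ATM-TN.
    // These class names are hypothetical stand-ins for SimKit's sk_lp and
    // sk_event base classes; the real API is described in [11].
    #include <cstdio>
    #include <queue>
    #include <vector>

    struct CellEvent {                  // a timestamped message between LPs
        double timestamp;               // simulation time of the cell arrival
        int    dest_lp;                 // index of the destination LP
        int    vci;                     // virtual channel identifier
    };

    struct LaterEvent {                 // orders the event list by timestamp
        bool operator()(const CellEvent& a, const CellEvent& b) const {
            return a.timestamp > b.timestamp;
        }
    };

    class LogicalProcess {              // stand-in for an sk_lp subclass
    public:
        virtual ~LogicalProcess() = default;
        virtual void process(const CellEvent& ev) = 0;  // handler written by the modeller
    };

    class SinkLP : public LogicalProcess {
    public:
        void process(const CellEvent& ev) override {
            ++cells_received_;          // e.g., tally cells for loss statistics
            std::printf("t=%.6f  cell on VCI %d\n", ev.timestamp, ev.vci);
        }
    private:
        long cells_received_ = 0;
    };

    // A toy sequential "kernel": pop events in timestamp order and dispatch them.
    // A parallel kernel (WarpKit, WaiKit) replaces this loop, but the model code
    // above is unchanged.
    void run_sequential(std::vector<LogicalProcess*>& lps,
                        std::priority_queue<CellEvent, std::vector<CellEvent>, LaterEvent>& fel) {
        while (!fel.empty()) {
            CellEvent ev = fel.top();
            fel.pop();
            lps[ev.dest_lp]->process(ev);
        }
    }

    int main() {
        SinkLP sink;
        std::vector<LogicalProcess*> lps = { &sink };
        std::priority_queue<CellEvent, std::vector<CellEvent>, LaterEvent> fel;
        fel.push({2.74e-6, 0, 42});     // two cell arrivals, roughly one cell
        fel.push({5.47e-6, 0, 42});     // time apart on a 155 Mbps link
        run_sequential(lps, fel);
        return 0;
    }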
3. LESSONS LEARNED In this section, we present the main lessons that we have learned (to date) from the ATM-TN TeleSim project. For ease of presentation, we separate these lessons into project management lessons, general simulation lessons, parallel simulation lessons, and ATM network performance lessons.
3.1. Project Management Lessons
Collaboration is a key to success. The TeleSim project brought together expertise from two distinct research camps: ATM networking and parallel simulation. Together, we have been able to build a much larger simulation platform than would have been possible by either group in isolation. Furthermore, this shared simulation platform has proven to be of immense research value to both groups. In particular, it has provided the parallel simulation researchers with a new, relevant, and practical application domain on which to apply their parallel simulation techniques, and it has provided the networking researchers with a new, fast, and flexible platform on which to run ATM simulation experiments. In our opinion, the synergy between our two research camps provides an excellent example of a paradigm for collaborative research.
Industry input is invaluable. Input from the industrial sponsors has also been key to the success of the project, providing focus for our research efforts, and increasing the value and relevance of our “end product”. Regular meetings with the project sponsors, both collectively at TeleSim project workshops, and individually at the company sites, have helped to define (and refine) our research goals throughout the project. This sustained contact and interaction adds even more synergy to the project, and helps facilitate timely technology transfers, in both directions.
Think long term, not short term. A central goal from the beginning of our project was a long-lived ATM-TN simulator. Because efficient parallel models are difficult to develop, we planned from the start to build not just a simulation model but a “simulator”: a parameterizable, configurable tool that can mimic a broad range of scenarios without new code being written. This was, and still is, the best approach for producing a useful and practical simulator. We have not yet gone as far as we would like in this direction (e.g., plug-in modules), but we have laid a solid foundation on which to build.
Effort spent on carefully defining interfaces saves substantial time in the long term. In our project, substantial portions of the software were developed concurrently by approximately ten graduate students at three universities. Having clear interfaces between traffic models, switch models, and the overall ATM modeling framework was vital. In the ATM-TN, these include the SimKit and ATM-MF (Modeling Framework) interfaces, and the documentation of each. These interfaces were developed early in the project by its most experienced personnel, and the resulting interface specifications have helped to guide and shape all of the subsequent software components. On the software engineering front more generally, the documentation produced during initial development proved crucial, even though it was not always kept up to date.
Maintaining software is an imposing challenge. The sheer size of the ATM-TN, its multiple versions (e.g., different simulation kernels, different hardware platforms, different compilers, and different users with “customized” versions), and the constant turnover of students on the project make maintaining the software (and its documentation) a challenge. CVS has been one valuable tool for software management and distribution within our project, but more rigorous software engineering practices are certainly advisable in a project of this size.
3.2. General Simulation Lessons
Model only what you need to model. The old adage that “the hard part of simulation is deciding what not to model” is definitely true. We have made both good and bad decisions regarding this point. On the positive side, we decided at an early stage of the project to model at the ATM cell level, for reasons of fidelity, but not to model anything below this layer (e.g., physical layer transmission), opting instead for abstract link models. While this decision may preclude us from studying new access technologies, such as wireless ATM or ADSL, it does allow us to focus our interest on higher layer ATM performance issues, such as TCP/IP over ATM, and end-to-end quality of service issues. On the negative side, one questionable decision was to provide a detailed model of ATM UNI 3.0 signalling. While having support for both SVC (Switched Virtual Channel) and PVC (Permanent Virtual Channel) connections in our simulator is an advantage, the SVC signalling code turned out to be the largest part of the switch and network models, making it the most time-consuming to write, and the most difficult to debug, maintain, and parallelize. Furthermore, this code is now obsolete, since the ATM Forum has subsequently standardized UNI 4.0. A substantial rewrite would be required to update and extend the signalling protocol (e.g., to support point-to-multipoint connections, or to add support for the PNNI (Private Network-Network Interface) signalling and routing protocols).
Tracking ATM standards is a never-ending battle. The rapid evolution of ATM technology and standards (e.g., ATM Traffic Management 4.0, new ATM service classes, ABR traffic control, per-VC queueing, PNNI) presents a never-ending challenge for ATM simulation users and developers. While several of our industrial sponsors attend the ATM Forum, and provide us with regular updates on the Forum’s activities, we find that our “to do” list of simulator enhancements always grows faster than our ability to do them.
If a simulator is hard to use, no one will use it. A decision made early in our project was to focus on the design and implementation of the simulator itself, rather than on the user interface for the simulator. As a result, the user interface for the ATM-TN simulator is currently text-based, not graphical. Simulation users must define a number of data set files, including traffic source types and instances, ATM switch types and instances, and the links and ports defining the physical interconnections of the network being simulated. Needless to say, this simple text-based interface is cumbersome, particularly for first-time users of the ATM-TN, and particularly when defining large ATM network scenarios. It has also hampered the adoption of our simulator by other users, for several reasons. First, the lack of a graphical user interface (GUI), as is available on most commercial (and even non-commercial) network simulators, really hampers usability. Second, the ATM-TN simulator, when used by our industrial sponsors, is only one of the tools in a much larger toolset. Seamless integration of our simulator with other tools is the ultimate goal, though this often implies the need for vendor-specific customization of our simulator to work with proprietary tools. In the interim, we are striving for interoperability with other tools, by standardizing on consistent intermediate formats for input and output data sets. One of our industrial sponsors has already integrated our simulator with their other network planning tools, thus making use of the ATM-TN simulator “in-house”, in a usable way. Ongoing efforts within the TeleSim project are addressing the usability issue. A prototype user interface for input data management has been developed, and techniques for automated generation of simulation data sets are under study. A graphical user interface for defining network and traffic models may soon follow.
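As a rough illustration of the kind of input data sets described above, a scenario definition might look something like the following. The syntax, keywords, and parameter names here are invented for exposition only; they do not reproduce the actual ATM-TN data set format.

    # Hypothetical scenario fragment (not actual ATM-TN syntax)
    switch   SW1    type=output_buffered   ports=16   buffer=1000cells
    switch   SW2    type=output_buffered   ports=16   buffer=1000cells
    link     SW1.3  SW2.7   rate=155Mbps   delay=2ms
    traffic  src1   type=mpeg_video   attach=SW1.1   dest=SW2.5   mean=4.8Mbps
    traffic  src2   type=web_client   attach=SW1.2   dest=SW2.5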
3.3. Parallel Simulation Lessons
The old myth that developing parallel simulation models requires extensive experience is not a myth. Many of the traffic and network models in the ATM-TN simulator were developed by graduate students with little or no experience in parallel programming. As a result, significant effort was required to convert the initial sequential models into a form suitable for parallel execution (e.g., with state saving where required by WarpKit). In some cases, the structure of the initial sequential model either violated the design tenets of parallel programming, or hampered the parallel performance achievable. For example, the initial version of the Ethernet LAN traffic model used only one logical process (LP), representing both the source and the sink. The rewritten parallel version has a more natural partitioning into two logical processes, to facilitate efficient parallel execution. In other cases, the sheer complexity of the model, even in sequential form, made things difficult. For example, the ATM signalling portion of ATM-TN was its developer's first experience with parallel programming, and it took nearly a year to develop, debug, and correct. The old software engineering principle that one should first build a throwaway prototype did not help here, even though we followed it, because the prototype was built by different people than the production version. Clearly, additional experience with, and greater awareness of, the requirements of parallel programming would have resulted in cleaner implementations of these models.
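The restructuring of the Ethernet model can be pictured as follows. The class skeletons are illustrative only (they are not the actual ATM-TN Ethernet model code); the point is simply that two LPs connected only by cell events give the kernel the freedom to place the source and the sink on different processors.

    // Before: a single LP generates and consumes its own traffic, so the kernel
    // has no choice but to run both halves on the same processor.
    class EthernetTrafficLP /* : public sk_lp */ {
        void generate_next_burst();   // source side
        void consume_cell();          // sink side, sharing state with the source
    };

    // After: source and sink are separate LPs that exchange only timestamped
    // cell events, so they can be mapped to different processors.
    class EthernetSourceLP /* : public sk_lp */ {
        void generate_next_burst();   // schedules cell events toward the sink LP
    };

    class EthernetSinkLP /* : public sk_lp */ {
        void consume_cell();          // handles arriving cells; keeps only sink-side state
    };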
ATM network simulation has very low event granularity. The efficiency of a simulation depends on the granularity of events, where granularity refers to the amount of CPU time spent on “useful” work (i.e., user-level events, such as ATM cells) relative to the amount of CPU time spent on system-level overhead (e.g., generating, scheduling, and processing events). A granularity of 1 (i.e., the simulation spends equal time on system overhead and on user processing for each event) is barely acceptable for obtaining parallel speedup. The actual granularity is less than 1 for ATM-TN with conventional shared-memory optimistic and conservative parallel simulation kernels. Minimizing the overhead per simulation event is crucial, as is reducing the total number of events (which effectively increases the work per event) through proper design of the simulation models. Substantial effort has been spent in our project on both of these approaches to minimizing the performance impacts of low event granularity.
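A back-of-the-envelope illustration, with purely hypothetical per-event costs: suppose the model code for one ATM cell event takes 2 microseconds, while the kernel spends 4 microseconds per event on generation, scheduling, state saving, and synchronization. Then

    g = \frac{T_{\mathrm{useful}}}{T_{\mathrm{overhead}}} = \frac{2\,\mu s}{4\,\mu s} = 0.5,
    \qquad
    \frac{T_{\mathrm{overhead}}}{T_{\mathrm{useful}} + T_{\mathrm{overhead}}} = \frac{1}{1+g} = \frac{2}{3},

so two thirds of every processor's time goes to the kernel rather than to the model. Halving the per-event overhead, or doubling the useful work per event (for example, by aggregating several cells into one event), raises g to 1 and cuts the total cost per cell from 6 to 4 microseconds.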
ATM network simulation is a promising application domain for parallel simulation. Experiments with medium to large size ATM network scenarios (e.g., 30-100 ATM switches, in regional or national ATM networks) show that execution time speedups of 3-5 are possible on a 16-processor shared-memory SGI PowerChallenge [18]. While such specialized computing platforms are not widely available, remote access to such platforms is possible. Furthermore, use of the ATM-TN simulator on desktop machines (e.g., 1-4 processors) still provides satisfactory performance, much faster than OpNet. An illustration of this performance potential is provided in Tables 1 and 2. In particular, Table 1 shows the relative complexity of several ATM network scenarios that we use in our parallel simulation experiments, and Table 2 shows the absolute speedup of the parallel version of the simulator (compared to execution of the sequential version of the simulator on a single processor of the same hardware platform). In these tables, the Wnet models represent a regional ATM network in western Canada, with different traffic loads and different traffic mixes. The NTN models represent the CANARIE National Test Network, a Canada-wide experimental ATM network. Table 1 lists the number of ATM switches in each scenario, as well as the number and types of traffic sources (e.g., Deterministic, Bernoulli, Ethernet, MPEG, World Wide Web, and TCP). The relative loading on the scenarios is reflected in the number of simulation events executed (see the second last column of Table 1). For example, the NTN-1,
Table 1. Summary of Wnet and NTN Benchmark Scenarios
(traffic source columns: Determ, Bern, Ether, MPEG, Web, TCP)

  Benchmark   Determ  Bern  Ether  MPEG  Web  TCP   ATM       Num    Events     Time
  Scenario                                          Switches  LPs    (x 10^6)   (sec)
  Wnet-1         0      0    10      2    0    0      11       181      22       10
  Wnet-2         2      2    10      8    0    3      11       173      18       10
  Wnet-3         2      2    10      8    4    3      11       173      18       10
  NTN-0          0      0    53     30    0   16      54       869      48        5
  NTN-1          0      0    62    269    0   24      54      1381      73        5
  NTN-2          0      0    62    269    0   24      54      1381     132        5
  NTN-3          0      0    62    269    0   24      54      1381     216        5
Table 2. Absolute Speedup for WarpKit Optimistic Parallel Simulation Kernel (SGI Power Challenge)

  Benchmark   Sequential   WarpKit Optimistic Parallel Execution
  Scenario    Execution    1 PE   2 PEs  4 PEs  8 PEs  12 PEs  16 PEs
  Wnet-1         1.0       0.41   0.68   1.37   2.23    2.80    3.02
  Wnet-2         1.0       0.45   0.57   0.97   1.80    2.24    2.00
  Wnet-3         1.0       0.45   0.62   0.97   1.78    2.31    2.14
  NTN-0          1.0       0.44   0.73   1.37   2.70    3.84    4.81
  NTN-1          1.0       0.49   0.75   1.35   2.26    2.76    3.18
  NTN-2          1.0       0.52   0.83   1.34   2.37    3.08    3.61
  NTN-3          1.0       0.51   0.90   1.53   2.84    3.75    4.47
NTN-2, and NTN-3 scenarios represent light, medium, and heavy load, respectively, on the National Test Network scenario. Table 2 shows that the parallel version of the ATM-TN simulator is slower than the sequential version of the simulator, by about a factor of two, when executed on a single processing element (PE). This slowdown is due to the additional overheads associated with parallel execution (e.g., state-saving, rollbacks, GVT computation, fossil collection). However, the speedup offered by the parallel version of the simulator improves with the number of PEs, with the breakeven point (compared to the sequential simulator) typically around 3 or 4 processors. Performance continues to improve with the number of PEs for most scenarios, except those that have limited inherent parallelism (e.g., Wnet-2 has one heavily congested link that constrains the speedup achievable with parallel execution). More importantly, Table 2 also shows that the parallel simulation speedup achieved by our simulator improves with the relative size of the ATM network scenario executed (e.g., NTN versus Wnet), as well as with the relative traffic load on each scenario (e.g., NTN-1, NTN-2, NTN-3). These results reflect very favourably on the performance benefits achievable with parallel simulation, on practical and relevant ATM network scenarios.
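To read the table in efficiency terms: absolute speedup is the sequential run time divided by the parallel run time on p processing elements, and parallel efficiency is that speedup divided by p. For the heavily loaded NTN-3 scenario on 16 PEs, for example,

    S(16) = \frac{T_{\mathrm{seq}}}{T_{\mathrm{par}}(16)} = 4.47, \qquad E(16) = \frac{S(16)}{16} \approx 0.28,

so the parallel simulator delivers the answer about 4.5 times sooner, at roughly 28% of ideal linear speedup; the break-even point against the sequential simulator is where S(p) first exceeds 1, which Table 2 places at roughly 4 PEs for most scenarios.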
Explore new approaches to ATM parallel simulation. Proper partitioning and load balancing are essential to parallel simulation performance, since speedup depends on a well-balanced load among processors. Static partitioning is very difficult, because a good partitioning changes with the topology and traffic pattern of the network model. Dynamic load balancing is even more difficult, and expensive, given the pressure to reduce overhead.
The fundamental problem in getting satisfactory speedup is that while the simulation kernel is responsible for parallel execution, it has little knowledge about what parallelism exists in the application, and where. A new approach that incorporates application knowledge into the kernel promises close to linear speedup (i.e., N times faster on N processors) and makes it possible to run ATM-TN on inexpensive platforms (e.g., desktop machines with 4 processors). Preliminary experiments with this new task-based parallel simulation kernel (called TasKit) show impressive speedup results (nearly linear, up to 16 processors) on selected ATM benchmark scenarios [24]. Finding an efficient way to solve the problem of dynamic load balancing is one of the major objectives in this new approach. In this regard, TasKit appears very promising for shared-memory multiprocessor platforms. Extending TasKit to distributed-memory multiprocessor platforms will be the subject of future research.
3.4. ATM Network Performance Lessons
Good traffic models are important in a network simulator. Substantial effort in our project went into the design, implementation, and validation of traffic models representing a wide variety of ATM traffic flows (e.g., MPEG/JPEG video traffic [1], World Wide Web [2], Ethernet data traffic [3], and TCP/IP [12]). These traffic models have increased the value of the ATM-TN simulator, particularly for our industrial sponsors. In fact, two of our sponsors have taken standalone versions of the traffic models extracted from the ATM-TN simulator, and integrated these traffic models with their own simulation tools.
There is no substitute for good traffic measurement data. One strength of our traffic modeling work has been the use of empirical measurement data in the design, construction, parameterization, and validation of the traffic source models. Good network traffic measurement data is a vital part of the workload characterization and modeling process. In some cases, we collected our own network traffic measurements [2, 20, 21]. In other cases, we relied on empirical data sets available in the published literature [10, 13]. More recently, several of our industrial sponsors have been forthcoming with network measurement data sets for us to analyze [22]. The use of realistic traffic source models based on real-world data certainly increases the relevance of network performance studies using ATM-TN.
Most network traffic is long-range dependent. There is ample evidence in the networking literature that network traffic is long-range dependent (i.e., self-similar). Self-similarity implies the presence of visually similar traffic bursts across a wide range of time scales, ranging from milliseconds to hours. Statistically, long range dependence refers to the presence of non-negligible slowly-decaying positive correlations in the traffic over long time scales. This behaviour has been observed in Ethernet LAN traffic [13, 23], wide-area network TCP/IP traffic [16], compressed video traffic [10], and World Wide Web traffic [6]. We have also seen evidence of network traffic self-similarity in our own measurements of MBONE video traffic [21] and Frame Relay traffic [22]. The presence of self-similarity is worrisome because it implies long-lasting (i.e., persistent) burst behaviour in the network, which can in turn lead to buffer ineffectiveness and a high cell loss ratio (CLR) in ATM networks. In fact, ATM performance studies using long-range dependent traffic sources can produce CLR results that differ by several orders of magnitude from those predicted by the traditional Markovian (i.e., short-range dependent) traffic source models [3]. It is for this reason that long-range dependent traffic sources are vital to have in an ATM network simulator.
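One standard way to make the "slowly-decaying correlations" statement precise is in terms of the autocorrelation function of the stationary traffic rate process. For a long-range dependent process with Hurst parameter H in (0.5, 1),

    \rho(k) \sim c\, k^{-(2-2H)} \quad \text{as } k \to \infty, \qquad \sum_{k} \rho(k) = \infty,

whereas short-range dependent (e.g., Markovian) models have correlations that decay at least geometrically and therefore sum to a finite value. Equivalently, the variance of the m-aggregated process decays like m^{-(2-2H)} rather than m^{-1}, which is why the bursts do not "smooth out" under aggregation over longer time scales.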
Statistical multiplexing gains are possible, even for self-similar network traffic sources. Despite the worries identified previously for self-similar traffic, it is reassuring to note that statistical multiplexing gains still exist, even for self-similar traffic streams. While we are not
the first to discover this (see Section V of [8], for example), we have been able to use our simulator and our access to good empirical measurement data to demonstrate this effect, and to develop a “rule of thumb” for the traffic management of self-similar traffic sources. To date, we have conducted three sets of experiments on this topic: experiments with simulated JPEG sources [15], experiments with MPEG sources [4], and experiments based on empirical Frame Relay network traffic measurements [22]. Results from all three experiments show promising statistical gains. In this paper, we present only one example of such a study, using the ATM-TN and its JPEG video traffic source model. A more detailed description of this particular experiment is available in another paper [15]. This particular experiment compares our simulation results to the theoretical results predicted by the Norros effective bandwidth formula [14]. In particular, Norros states that the effective bandwidth C for a self-similar traffic source is:

    C = m + \left( \kappa(H) \sqrt{-2 \ln \varepsilon} \right)^{1/H} a^{1/(2H)} B^{-(1-H)/H} m^{1/(2H)}

where m is the mean bit rate of the traffic stream (in bits/sec), a is the variance coefficient of the traffic stream (in bit-sec), H is the Hurst parameter of the stream (a dimensionless measure of long range dependence, with 0.5 \le H < 1), \kappa(H) = H^H (1-H)^{(1-H)}, B is the buffer size (in bits), and \varepsilon is the target cell loss ratio (CLR) for the traffic stream. The purpose of our experiment is to assess the accuracy of this formula for multiplexed VBR video traffic streams.

The basic setup for the simulation experiment is shown in Figure 1. A set of JPEG video traffic sources feeds into an output-buffered ATM switch, and then onto a common output link.

[Figure 1. Network Model Used for Simulations: N traffic sources (source i with mean rate m_i and variance coefficient a_i) feed a shared output link buffer of b cells with an allocated capacity of C Mbps.]

The parameters that can be set in the simulation are the buffer size and the output link capacity in the network configuration, and the mean rate, variance coefficient, and Hurst parameter of the traffic source configuration(s). Buffer size and allocated bandwidth can be set explicitly. The mean rate and variance coefficient were adjusted by changing the number of traffic sources and the relative load offered by each. The Hurst parameter was fixed at 0.8 for the video source models used in this experiment, though it can also be varied [7, 15]. Figure 2 plots effective bandwidth versus mean rate, when the source characteristics are kept constant, and the number of sources is increased from 1 to 8. Varying the mean rate this way keeps the variance coefficient and Hurst parameter constant. The Norros formula and the ATM-TN simulator agree well in this example, and both illustrate statistical gain. For example, a single JPEG source requires an effective bandwidth of 8 Mbps, while an aggregation of 8 JPEG sources requires an effective bandwidth of approximately 50 Mbps (i.e., 6.3 Mbps each).
[Figure 2. Comparison of Simulation Results to Norros Theoretical Results when the number of 4.8 Mbps JPEG video sources varies, thus varying the total aggregated mean rate. The plot shows Effective Bandwidth C (Mbps, 0-60) versus Total Mean Rate m (Mbps, 0-40) for the Norros formula and the ATM-TN simulation (JPEG, B=1000 cells, CLR=0.0001).]

While the conformance between the theoretical and simulation results is much stronger here than in our MPEG and Frame Relay experiments [4, 22], all three sets of experimental results suggest positive statistical gains. Furthermore, the results suggest that the Norros effective bandwidth formula provides a good estimate of bandwidth allocations required for the effective management of self-similar traffic flows.
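For readers who wish to experiment with the formula, the following self-contained sketch evaluates the Norros effective bandwidth for an aggregate of n homogeneous sources (mean rate n*m, with the variance coefficient a and Hurst parameter H unchanged, as in the experiment above). The variance coefficient value used here is hypothetical, chosen only so that the single-source result lands near the 8 Mbps figure quoted above; it is not the actual parameter of the ATM-TN JPEG model.

    // Sketch: evaluating the Norros effective bandwidth formula from Section 3.4.
    // The variance coefficient below is a hypothetical, illustrative value.
    #include <cmath>
    #include <cstdio>

    // C = m + (kappa(H) * sqrt(-2 ln eps))^(1/H) * a^(1/(2H)) * B^(-(1-H)/H) * m^(1/(2H))
    double effective_bandwidth(double m,    // mean bit rate (bits/sec)
                               double a,    // variance coefficient (bit-sec)
                               double H,    // Hurst parameter, 0.5 <= H < 1
                               double B,    // buffer size (bits)
                               double eps)  // target cell loss ratio
    {
        double kappa = std::pow(H, H) * std::pow(1.0 - H, 1.0 - H);
        return m + std::pow(kappa * std::sqrt(-2.0 * std::log(eps)), 1.0 / H)
                 * std::pow(a, 1.0 / (2.0 * H))
                 * std::pow(B, -(1.0 - H) / H)
                 * std::pow(m, 1.0 / (2.0 * H));
    }

    int main()
    {
        const double cell_bits = 53.0 * 8.0;   // one ATM cell carries 53 bytes
        const double m   = 4.8e6;              // mean rate of one JPEG source (bits/sec)
        const double a   = 1.4e5;              // hypothetical variance coefficient (bit-sec)
        const double H   = 0.8;                // Hurst parameter used in the experiment
        const double B   = 1000.0 * cell_bits; // 1000-cell buffer, expressed in bits
        const double eps = 1.0e-4;             // target CLR

        // Aggregating n homogeneous sources scales the mean rate to n*m while the
        // variance coefficient and Hurst parameter stay fixed (see the text above).
        for (int n = 1; n <= 8; n *= 2) {
            double C = effective_bandwidth(n * m, a, H, B, eps);
            std::printf("n=%d sources: total mean %.1f Mbps, effective bandwidth %.1f Mbps\n",
                        n, n * m / 1e6, C / 1e6);
        }
        return 0;
    }

With these illustrative parameter values, the output reproduces the qualitative behaviour reported above: a single source comes out near 8 Mbps, while eight sources come out near 50 Mbps in aggregate, i.e., well under 8 Mbps per source.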
4. SUMMARY TeleSim is an ongoing collaborative research project with the goal of developing high performance parallel simulation tools for the design and analysis of broadband ATM networks. In our opinion, the project has been a successful one, in developing a simulation platform that can be used for ongoing research on ATM network performance and optimistic parallel simulation. Our experiences with the ATM-TN simulator to date have been very positive. On the parallel simulation front, we have found that ATM network simulation is a promising application domain for parallel simulation techniques, though there are significant technical challenges to overcome regarding event granularity, simulation partitioning, scheduling, and load balancing. On the network performance front, we have found that detailed cell-level traffic source models, validated against empirical measurements, are invaluable inputs to ATM simulation experiments, such as those studying the statistical multiplexing behaviour of self-similar traffic streams in high speed networks. We look forward to further research and development with our simulation platform as the TeleSim project evolves over the next few years, and to sharing our results with our industrial sponsors and the wider research community. We plan to make the ATM-TN simulator available [19] to other researchers, and hope that our experiences and insights can assist those pursuing similar research goals.
Acknowledgements The authors wish to thank the many people who have directly contributed to the design, implementation, and use of the ATM-TN simulator in its current form, and to its ongoing life in the TeleSim project. To name a few, these include: Martin Arlitt, Ying Chen, Zhong Chen, John Cleary, Alan Covington, Adi Damian, Mark Fox, Steve Franks, Pawel Gburzynski, Fabian Gomes, Ian Graham, Remi Gurski, Xiaoming Li, Guang Lu, Theodore Ono-Tesfaye, Alpesh Patel, Srinivasan Ramaswamy, Rob Simmonds, J.J. Tsai, and Jiayun Zhu. Financial support for the TeleSim project is provided by an NSERC Collaborative Research and Development (CRD) grant (CRD183839), and by our industrial sponsors: Newbridge Networks, Nortel, Siemens, Stentor Resource Centre Inc., Telus, and WurcNet Inc. This support, and the ongoing interactions with our industrial sponsors, are greatly appreciated, since they are vital to the success of the project.
References

[1] M. Arlitt, Y. Chen, R. Gurski and C. Williamson, “Traffic Modeling in the ATM-TN TeleSim Project: Design, Implementation, and Performance Evaluation”, Proceedings of the 1995 Summer Computer Simulation Conference (SCSC’95), Ottawa, Ontario, pp. 847-851, July 1995.
[2] M. Arlitt and C. Williamson, “A Synthetic Workload Model for Internet Mosaic Traffic”, Proceedings of the 1995 Summer Computer Simulation Conference (SCSC’95), Ottawa, Ontario, pp. 852-857, July 1995.
[3] Y. Chen, Z. Deng and C. Williamson, “A Model for Self-Similar Ethernet LAN Traffic: Design, Implementation, and Performance Implications”, Proceedings of the 1995 Summer Computer Simulation Conference (SCSC’95), Ottawa, Ontario, pp. 831-837, July 1995.
[4] B. Bashforth and C. Williamson, “Statistical Multiplexing of Self-Similar Video Streams: Simulation Study and Performance Results”, Proceedings of the Sixth International Symposium on the Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS’98), Montreal, PQ, Canada, July 1998.
[5] J. Cleary and J.J. Tsai, “Conservative Parallel Simulation of ATM Networks”, Technical Report 96/6, Waikato University, Hamilton, New Zealand, 1996.
[6] M. Crovella and A. Bestavros, “Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes”, Proceedings of the 1996 ACM SIGMETRICS Conference, Philadelphia, PA, pp. 160-169, May 1996.
[7] Z. Deng, “Modeling and Analysis of Self-Similar Video Traffic”, M.Sc. Thesis, Department of Computer Science, University of Saskatchewan, Saskatoon, SK, June 1996.
[8] A. Erramilli, O. Narayan, and W. Willinger, “Experimental Queueing Analysis with Long-Range Dependent Packet Traffic”, IEEE/ACM Transactions on Networking, Vol. 4, No. 2, pp. 209-223, April 1996.
[9] R. Fujimoto, “Time Warp on a Shared-Memory Multiprocessor”, Proceedings of the 1989 International Conference on Parallel Processing, pp. 242-249, July 1989.
[10] M. Garrett and W. Willinger, “Analysis, Modeling and Generation of Self-Similar VBR Video Traffic”, Proceedings of ACM SIGCOMM ’94, London, UK, pp. 269-280, August 1994.
[11] F. Gomes, S. Franks, B. Unger, Z. Xiao, J. Cleary, and A. Covington, “SimKit: A High Performance Logical Process Simulation Class Library in C++”, Proceedings of the 1995 Winter Simulation Conference, Arlington, VA, December 1995.
[12] R. Gurski and C. Williamson, “TCP over ATM: Simulation Model and Performance Results”, Proceedings of the 1996 IEEE International Phoenix Conference on Computers and Communications (IPCCC’96), Phoenix, AZ, pp. 328-335, March 1996.
[13] W. Leland, M. Taqqu, W. Willinger, and D. Wilson, “On the Self-Similar Nature of Ethernet Traffic (Extended Version)”, IEEE/ACM Transactions on Networking, Vol. 2, No. 1, pp. 1-15, February 1994.
[14] I. Norros, “On the Use of Fractional Brownian Motion in the Theory of Connectionless Networks”, IEEE Journal on Selected Areas in Communications, Vol. 13, No. 6, pp. 953-962, August 1995.
[15] A. Patel and C. Williamson, “Effective Bandwidth of Self-Similar Traffic Sources: Theoretical and Simulation Results”, Proceedings of the IASTED Conference on Applied Modeling and Simulation (AMS’97), Banff, AB, pp. 298-302, July 1997.
[16] V. Paxson and S. Floyd, “Wide Area Traffic: The Failure of Poisson Modeling”, Proceedings of the 1994 ACM SIGCOMM Conference, pp. 257-268, August 1994.
[17] B. Unger, F. Gomes, X. Zhonge, P. Gburzynski, T. Ono-Tesfaye, S. Ramaswamy, C. Williamson and A. Covington, “A High Fidelity ATM Traffic and Network Simulator”, Proceedings of the 1995 Winter Simulation Conference (WSC’95), Arlington, VA, December 1995.
[18] B. Unger, X. Zhonge, J. Cleary, J. Tsai, and C. Williamson, “Parallel Shared-Memory Simulator Performance on Large ATM Network Scenarios”, submitted for publication, 1998.
[19] TeleSim Project, URL http://www.wnet/ca/telesim/
[20] R. van Melle, C. Williamson, and T. Harrison, “Diagnosing a TCP/ATM Performance Problem: A Case Study”, Proceedings of IEEE GLOBECOM’97, Phoenix, AZ, pp. 1825-1831, November 1997.
[21] R. van Melle, C. Williamson, and T. Harrison, “Network Traffic Measurements of MBONE over ATM”, Proceedings of the Workshop on Workload Characterization in High Performance Computing Environments, Montreal, PQ, Canada, July 1998.
[22] C. Williamson and F.M. Foo, “Network Traffic Measurements of IP/FrameRelay/ATM”, Proceedings of the Workshop on Workload Characterization in High Performance Computing Environments, Montreal, PQ, Canada, July 1998.
[23] W. Willinger, M. Taqqu, R. Sherman, and D. Wilson, “Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level”, Proceedings of the 1995 ACM SIGCOMM Conference, Cambridge, MA, September 1995.
[24] X. Zhonge, B. Unger, and J. Cleary, “TasKit: High Performance Task-Based Network Simulation”, TeleSim Project Internal Report, February 1998.