ENTERTAINMENT EVERYWHERE

The Brave New World of Online Digital Home Entertainment
Jason But, Thuy T.T. Nguyen, and Grenville Armitage, Swinburne University of Technology

ABSTRACT

The emergence of widespread broadband home Internet connectivity is leading to a change in patterns of home user online behavior. Innovative networked applications (e.g., online multimedia and gaming) are making their mark. Will the next killer Internet applications be new forms of online digital home entertainment? Can the Internet support a widespread explosion in the use of such applications? In this article we explore potential problems in running interactive multimedia and game applications over existing Internet and home access network infrastructures. We also discuss the issues both network and application developers should consider when designing new Internet entertainment applications such that widespread usage becomes a possibility.

INTRODUCTION

The recent widespread uptake of broadband access technologies has led to a shift in how the Internet is being used. The availability of an always-connected high-speed Internet connection means that home users are increasingly likely to use the Internet as an information repository and content delivery resource. Higher content access speeds coupled with zero connection time mean that Internet usage can become spontaneous rather than planned for. An always-on broadband connection also increases the range of applications users are willing to try and adopt, beyond the current staple diet of Web surfing and email. Exposure to peer-to-peer applications (which include instant messaging and chat services) creates an interest in more advanced applications such as streaming multimedia, online gaming, and real-time telecommunications.

Of particular interest is whether the current Internet architecture could support an explosion in the patronage of these services and applications. This leads to the question of what network and application developers should keep in mind when designing such systems.

We first present a brief overview of current home Internet access technologies.

We detail some of the limitations of existing network infrastructure, in particular the limited research performed on concurrent networked applications sharing network resources. We propose the concept of an inverted capacity network design, such as may occur in a future where fiber to the home becomes a real possibility. We then look at running online applications in a broadband environment, and discuss possible solutions to the problems of scalability and network congestion. Finally, we outline the issues that must be considered by both network and application designers in order to develop functional, scalable home entertainment Internet options.

CURRENT INTERNET ACCESS TECHNOLOGIES

Traditional home Internet access has been provided by "56 kb/s" (and slower) analog modems, which are not really suitable for modern interactive home entertainment applications. More recently, emerging broadband technologies offer bandwidths up to and beyond 10 Mb/s, far more suitable for content-rich multimedia and interactive applications.

There are many different broadband access technologies. For consumer access the most common are digital subscriber line (DSL) services (e.g., asymmetric DSL, ADSL) and cable modems (based on existing hybrid fiber/coax "cable TV" distribution systems). Less common are satellite Internet access, wireless LAN, third generation (3G) cellular, and fiber to the home.

Looking at current home broadband access, most offerings are limited to 256 kb/s, 512 kb/s, or 1.5 Mb/s download rates, with upstream rates typically starting at 128 kb/s. While an order of magnitude greater than analog modems, these data rates are still substantially smaller than what can be achieved with true fiber to the home. Although today's broadband access rates are high enough to support most current multimedia and online gaming applications, we must consider how future usage will affect the entire network rather than just individual links.
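To put these access rates in perspective, the following Python sketch estimates how long a single large content item would take to download at each rate. The 700 Mbyte file size is an assumption chosen purely for illustration; protocol overheads are ignored.

```python
# Illustrative only: download time for an assumed 700 Mbyte content item
# at the access rates discussed above (protocol overheads ignored).
FILE_BITS = 700 * 8 * 10**6  # 700 Mbytes expressed in bits

rates_bps = {
    "56 kb/s analog modem": 56e3,
    "256 kb/s broadband": 256e3,
    "1.5 Mb/s broadband": 1.5e6,
    "10 Mb/s broadband": 10e6,
    "1 Gb/s fiber to the home": 1e9,
}

for name, rate in rates_bps.items():
    minutes = FILE_BITS / rate / 60
    print(f"{name:>25}: {minutes:10.1f} minutes")
```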

LIMITATIONS OF EXISTING INTERNET ACCESS

It is important to take a systems view of broadband Internet access. Instead of asking whether a particular last-mile technology can support a particular streaming video or interactive game application, we should consider how an entire neighborhood or geographical region would cope with a high concurrent traffic load. For example, we need to ask questions like: "If 50 percent of an Internet service provider's (ISP's) customers wish to stream X different movies concurrently, will the entire system (video servers, Internet, ISP network, and broadband link) be able to support this traffic flow?" or "If a game server has 20 active players, 16 connecting from home residences within one ISP's network and four connecting from other parts of the Internet, is the overall system able to provide an adequate gaming experience for all players?" Answering these questions is non-trivial; we need to consider not only how applications interact with the network, but also how applications interact with each other on the network.

Typical broadband ISP access networks are illustrated in Fig. 1, where a, b, and c show DOCSIS, ADSL, and 802.11b wireless LAN networks, respectively. In these scenarios, home user equipment (running network applications such as Web browsing, data and movie downloads, interactive online games, and voice chat) is connected to remote content and game servers via the broadband access network of an ISP.

In a DOCSIS [1] network, a user's traffic has to travel through the user's cable modem (CM), the hybrid fiber/coax (HFC) network, and the cable modem termination system (CMTS) at the ISP site before reaching the Internet. An ISP can specify class of service (CoS)/quality of service (QoS) parameters for individual customers, ensuring service isolation even though the physical coaxial cable is shared by many customers. These parameters specify operational limits for the cable modem, for example, downstream and upstream speed limits, maximum burst size, maximum number of customer premises equipment (CPE) devices per cable modem, and so on. DOCSIS customers share a single downstream channel with a maximum bandwidth of ~27 Mb/s and an upstream channel with a maximum of ~3 Mb/s.

ADSL access links can provide users with a maximum downstream bandwidth of up to 9 Mb/s across standard telephone service copper pairs; upstream bandwidth of up to 1 Mb/s is supported [2]. In practice, the best speeds widely offered today are 1.5 Mb/s downstream, with upstream speeds varying between 64 and 640 kb/s. ADSL customers have a dedicated line to the central office, where the DSL access multiplexer (DSLAM) aggregates every customer's traffic onto the ISP's backbone.
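As a back-of-envelope illustration of the concurrent-streaming question posed at the start of this section, the sketch below compares aggregate demand against the shared DOCSIS downstream channel just described. The number of homes per channel and the per-stream rate are assumptions chosen for illustration, not figures from our experiments.

```python
# Rough capacity check for one shared DOCSIS downstream channel.
# Values marked "assumed" are illustrative, not measured.
homes_per_channel = 200        # assumed homes sharing one downstream channel
active_fraction = 0.5          # "50 percent of customers" from the question above
stream_rate_mbps = 1.5         # assumed rate of a single streamed movie
channel_capacity_mbps = 27     # maximum shared DOCSIS downstream bandwidth

offered_load = homes_per_channel * active_fraction * stream_rate_mbps
print(f"Offered load    : {offered_load:.0f} Mb/s")
print(f"Channel capacity: {channel_capacity_mbps} Mb/s")
print(f"Oversubscription: {offered_load / channel_capacity_mbps:.1f}x")
```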

Figure 1. Typical broadband access networks: a) DOCSIS network; b) ADSL network; c) 802.11b network.

The IEEE 802.11b specification [3] defines the physical layer and medium access control (MAC) sublayer for communications across a shared wireless LAN at up to 11 Mb/s. At the physical layer, IEEE 802.11b radios operate at 2.45 GHz and use direct sequence spread spectrum (DSSS) transmission. At the MAC sublayer, 802.11b uses carrier sense multiple access with collision avoidance (CSMA/CA).

Despite the emerging popularity of these technologies, the consequences of mixing interactive and noninteractive traffic over DOCSIS, ADSL, or 802.11b wireless links have not been fully explored experimentally. In previous work [4–6] we studied these access technologies in a live testbed [7]. We demonstrated that both a rate-capped downstream link from the CMTS to a cable modem and the wireless link from an access point (AP) to customer equipment can introduce substantial delay to traffic sharing the medium along its end-to-end path, despite generous available link capacity. Such additional delays (on the order of 100 ms for the DOCSIS network and 50 ms for the 802.11b network) are significant when the broadband link is part of an overall service supporting voice over IP (VoIP), interactive online game traffic, or similar delay-sensitive applications.

For example, we have shown that a modest burst of traffic in the downstream direction of a DOCSIS network causes a dramatic increase in round-trip time (RTT) over the link [4]. A number of TCP sessions were run, each transferring 8 Mbytes downstream, with downstream and upstream bandwidths limited to 2 Mb/s and 1 Mb/s, respectively. The RTT before, during, and after each session was estimated using Internet Control Message Protocol (ICMP) echo/response exchanges between the client and the server.

Figure 2 shows that an idle link has an RTT of approximately 13 ms, which increases to approximately 120 ms during each TCP transfer (118 ms with a 1500-byte maximum transmission unit, MTU; 119 ms with a 1250-byte MTU; 123 ms with a 576-byte MTU; and 137 ms with a 512-byte MTU). Further investigation showed that the jump in RTT was most dramatic when the content source fed data to the CMTS at a rate higher than the downstream rate limit (2 Mb/s).
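One plausible way to read this result is as simple queue buildup at the downstream rate limiter: packets arriving faster than 2 Mb/s wait behind queued data, and every ICMP probe sees that waiting time. The queue occupancy in the sketch below is an assumption chosen to show the arithmetic, not a measured value.

```python
# Simplified queueing arithmetic (assumed queue depth, not a measurement):
# extra delay seen by any packet behind a standing queue drained at 2 Mb/s.
rate_limit_bps = 2e6      # downstream cap used in the experiment
mtu_bytes = 1500          # one of the tested MTU sizes
queued_packets = 18       # assumed standing queue during the TCP transfer

extra_delay_ms = queued_packets * mtu_bytes * 8 / rate_limit_bps * 1000
print(f"~{extra_delay_ms:.0f} ms of added delay")
# Added to the ~13 ms idle RTT, this lands near the ~120 ms measured value.
```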

Figure 2. Average RTT before, during, and after TCP transactions with different MTU sizes.

Figure 3. Estimated equivalent TCP capacity consumed by Quake III traffic.

Figure 4. Estimated equivalent TCP capacity consumed by Half-Life traffic for different maps played (ChilDM, Odyssey, Rats3, and Xflight).

An obvious example would be an Internet user accessing content stored on their ISP's local content cache, which itself is likely to connect to the CMTS via local 100 Mb/s LAN connections.

Similar behavior has been seen in the 802.11b network: an idle link shows a 2.6 ms RTT, which jumps to just over 120 ms with a 512-byte MTU, 70 ms with a 1000-byte MTU, 67 ms with a 1200-byte MTU, and 55 ms with a 1500-byte MTU [5]. Despite the relatively high downstream capacity, it is clear that potential exists for unexpected end-to-end behavior.

The spike in RTT over the downstream links affects all traffic sharing a particular modem in a DOCSIS/ADSL network, or a particular AP in a wireless network. This has significant real-world implications. Consider an ISP hosting local content servers on its 100 Mb/s or 1 Gb/s backbone and encouraging its directly attached "broadband" customers to download locally rather than from distant servers. Such customers are likely to discover their RTT to other parts of the Internet jumping dramatically during the local content transfer. Although probably not noticeable if the customer is doing nothing else at the time, the RTT jump will be highly disruptive if the customer site is attempting concurrent interactive VoIP or online gaming.

The effects of sharing resources between different applications and clients over an 802.11b network have been studied in [6]. We have verified that low-rate non-reactive packet flows to and from one client can "steal" significant capacity from concurrent TCP flows to other clients. For example, a flow of 64-byte ICMP ping packets at 250 packets/s (~128 kb/s) can degrade the throughput of a concurrent TCP flow by up to 50 percent (from 4 Mb/s to ~2 Mb/s). This observed behavior is a natural consequence of the 802.11b frame transmission protocol.

Another scenario that leads to unexpected performance degradation is where a number of 802.11b-enabled game clients cluster around an 802.11b hot spot, or utilize an 802.11b enterprise network as the backbone for a LAN party [5]. Figures 3 and 4 show the nominal average bandwidth consumed by the aggregate client-server and server-client traffic as a function of the number of game clients (given the packet rate and average packet size). They also show the effective capacity reduction (TCP throughput lost) caused by carrying the game traffic, based on the number of media accesses per second that remain available to other traffic flowing through an AP. For example, while the actual bandwidth requirement of 10 Quake III players or 20 Half-Life players is less than 1 Mb/s, they would "steal" roughly 4 Mb/s and 3.5 Mb/s, respectively, of potential TCP throughput on an 802.11b network.

These experimental studies illuminate a number of factors broadband access network operators should consider when deploying services to customers with heterogeneous applications and requirements.
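The disproportionate cost of small packets on 802.11b follows from per-frame overheads (preamble, interframe spaces, backoff, and the link-layer ACK) that are paid regardless of payload size. The sketch below estimates the channel time consumed by the 64-byte ping flow using nominal 802.11b timing values; it is a rough model of the mechanism, not the analysis behind Figs. 3 and 4.

```python
# Rough per-frame airtime on 802.11b (nominal timing values; collisions
# and retries ignored), illustrating why a ~128 kb/s ping flow can consume
# a large fraction of the channel.
PHY_RATE = 11e6                 # b/s payload rate
SLOT, SIFS, DIFS = 20e-6, 10e-6, 50e-6
PREAMBLE = 192e-6               # long PLCP preamble + header (sent at 1 Mb/s)
CW_MIN = 31                     # mean backoff = CW_MIN / 2 slots

def frame_airtime(payload_bytes, mac_overhead_bytes=34):
    """Channel time for one data frame plus its ACK."""
    data = (payload_bytes + mac_overhead_bytes) * 8 / PHY_RATE
    ack = PREAMBLE + 14 * 8 / PHY_RATE
    return DIFS + (CW_MIN / 2) * SLOT + PREAMBLE + data + SIFS + ack

# 250 echo requests plus 250 replies cross the AP every second.
busy_fraction = 500 * frame_airtime(64)
print("Ping flow bit rate : ~128 kb/s")
print(f"Channel time used  : ~{busy_fraction:.0%} of each second")
```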

We believe that current broadband Internet access offers a tantalizing view of what is possible on the Internet, but this may be a somewhat false dawn. While patronage of these applications remains low, current Internet and ISP network configurations will be able to support the generated network traffic. However, as patronage increases, we believe the current architecture of home broadband access cannot support the resulting levels of usage.

THE INVERTED CAPACITY NETWORK

Imagine if we woke up tomorrow morning to find that our home Internet connection had been replaced with a fiber-to-the-home connection offering up to 1 Gb/s full-duplex connectivity to the Internet (Fig. 5). Now consider this scenario repeated in every home across the country; we would have a network that inverts today's consumer Internet experience, where access link capacities are substantially lower than core link capacities [7]. Our inverted capacity extended engineering experiment (ICE3) takes this scenario and explores how end users' experience of network performance and application utility would change compared to today's Internet [7].

ONLINE APPLICATIONS IN A BROADBAND ENVIRONMENT

Most network problems can be traced back to a bottleneck: not necessarily the link with the lowest available bandwidth, but a point where the instantaneous offered traffic load exceeds the network's forwarding capacity at that moment. It is not always the last-mile access link that degrades the end user's experience of network application service quality. Deployment of broadband access technologies in fact increases the likelihood that long-haul links, endpoint application servers, and hosts become bottlenecks in their own right. Thus, it is important to consider situations where the potential bottleneck is not the last-mile connection.

The Internet currently supports real-time and interactive applications after a fashion, largely because the overall usage of such applications is quite low [8]. If the use of these applications exploded overnight, it is not clear that the current Internet architecture would continue to support such interactive and real-time services adequately. A key part of the problem is that many content delivery services are provided from a single server or a small number of distributed servers. This small number of servers increases the potential for failure due to the capacity limits of either the servers or the network [8].

A solution to this problem can be found in the caching Web proxies used today in the Internet (Fig. 6). A Web proxy acts on behalf of multiple Web clients, who all access Web content indirectly via the proxy. When a request is made:
• The proxy checks its local disk cache to see if a copy of the requested content is there.
• If present, the client is served with the cached copy rather than retrieving it from the source server.
• If not present, the proxy retrieves the content from the original source server, delivers it to the client, and stores it in its cache.
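A minimal Python sketch of the hit/miss logic just listed follows. It is illustrative only: a production proxy would also handle cache expiry, validation, disk storage, and concurrent clients.

```python
# Minimal caching proxy sketch: serve from the cache on a hit,
# fetch from the origin server and cache the result on a miss.
import urllib.request

class CachingProxy:
    def __init__(self):
        self.cache = {}  # URL -> content (a real proxy would cache on disk)

    def fetch(self, url):
        if url in self.cache:                         # hit
            return self.cache[url]
        data = urllib.request.urlopen(url).read()     # miss: go to the source
        self.cache[url] = data                        # store for later clients
        return data

proxy = CachingProxy()
first = proxy.fetch("http://example.com/")   # pays the full retrieval time
second = proxy.fetch("http://example.com/")  # served from the local cache
```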

Figure 5. ICE3 conceptual diagram.

Figure 6. Internet cached content delivery model with content management.

This solution does not greatly impact Web access times. If content is not present at the proxy, the user must wait for the original object retrieval time plus a small fixed time while the proxy check occurs. If the content is already present (having recently been accessed by another user), content retrieval time is decreased and bandwidth consumption over the long-haul network is reduced. The caching process speeds up retrieval of frequently accessed content, and has the beneficial side effect of freeing up long-haul link capacity for downloading noncached content [8].

We believe that judicious use of proxies will have a substantial impact on the Internet as broadband access services reach more and more consumers. With greater raw bandwidth available to the user, the Internet is going to be used more frequently, contributing more network traffic more often. This leads to higher utilization of the network core and a greater probability of network failure. We believe that while proxy caches may not improve apparent Internet performance when accessing cached content, the overall effect of minimizing network traffic, and hence the benefit to uncached content, may prove to be measurable.

As broadband Internet access becomes ubiquitous and fiber to the home becomes standard technology, not only will the need for proxy caches increase, but the size of these caches will become important as well. Imagine an inverted capacity network design with an extremely large cache on each edge network. In this case we have a cache that is effectively one hop from the client, with an RTT of under 1 ms. This implies practically instantaneous download of any cached content; the larger the cache, the greater the likelihood of content being cached.

The idea of caches should be imported into the world of digital home entertainment. While an application such as large-scale video streaming could not be supported by today's architecture, the widespread presence of video proxy caches would relieve the load on the core of the network and make the implementation of such services possible. Video content need only be delivered once to an edge network, freeing the core to carry other traffic. Again, this implies improved apparent performance of other networked applications, since more core bandwidth is made available.

Interactive real-time applications are typically unable to benefit directly from caching. However, they benefit indirectly from the reduction of regular content retrieval traffic over long-haul links. If caches see widespread use and enough Internet content is actually cacheable, the core of the Internet will be freed up for true peer-to-peer traffic and cache updates. This will in turn result in improved performance of these applications.
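How much traffic such an edge cache can absorb depends on how concentrated content popularity is. The sketch below assumes a Zipf-like popularity distribution, which is a common modelling assumption rather than a result from this article, and shows the fraction of requests served locally as the cache grows.

```python
# Hit rate of an edge cache holding the k most popular titles, assuming
# Zipf-distributed request popularity over a catalog of N titles.
def zipf_hit_rate(cached_titles, catalog_size, s=1.0):
    weights = [1.0 / rank**s for rank in range(1, catalog_size + 1)]
    return sum(weights[:cached_titles]) / sum(weights)

N = 10_000  # assumed catalog size
for k in (10, 100, 1_000, 5_000):
    print(f"cache of {k:>5} titles -> ~{zipf_hit_rate(k, N):.0%} of requests served locally")
```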

CONSIDERATIONS FOR SYSTEM DESIGNERS

So what does this mean for network and application designers? At the moment, many Internet applications are designed simply to function, and testing appears to be done predominantly in the laboratory, where bandwidth is plentiful and concurrent users are few. The Internet community is large, and parts of the network are both complex and potentially short of available bandwidth. Furthermore, interactions between the many applications running on the Internet are difficult to predict and model. As such, a laboratory testbed does not reflect real-world conditions when we consider the scalability and robustness of Internet applications.

The Web has had a long period in which to evolve into a better application. In its infancy, the Web was organized much as online home entertainment applications are now: people invariably accessed content directly from the source server. Minimal bandwidth on international trunks and the cost of network access forced ISPs to consider caching Web content within their own networks. We now need to consider fast-tracking this natural evolution for other Internet applications.

The primary requirements of online home entertainment applications are typically real-time delivery, high bandwidth, or both. At the edge of the network, these problems are minimized:
• The edge of the network typically carries only traffic from or to directly connected hosts, so there is less interacting traffic to worry about.

• The edge of the network typically has more available bandwidth, even with current home access technologies providing the last mile.

If digital entertainment services were provided from caches and proxies at the edge of the network, many real-time and bandwidth-related issues would disappear, as it is relatively cheap for ISP operators to throw extra bandwidth at a problem as it emerges at the edge. Network designers therefore need to consider more liberal use of proxies and caches, and for more applications than just Web access. This is important even in today's home access environment, as it enables greater usage of fledgling applications such as video and audio streaming [8, 9], as well as improved performance of true peer-to-peer applications such as online gaming.

Furthermore, application designers need to design their systems with caching in mind. Many home entertainment applications require proxies that do more than just store and forward data as current Web proxies do; these new proxies would need to provide advanced functionality such as streaming in their own right. We believe that work should be done on standardizing proxy protocols so that advanced proxy services can be enabled at the edges of the Internet. When caching of these new applications becomes widespread, they will truly become available for use by the wider Internet community [8].
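As a sketch of what "more than store and forward" might mean, the fragment below caches fixed-size segments of a title and paces their delivery at the playback rate. The segment size and interface are our assumptions; they are not drawn from any existing proxy protocol.

```python
# Sketch of a streaming-aware edge proxy: cache fixed-size segments and
# deliver them paced to the playback rate rather than as one bulk transfer.
import time

SEGMENT_BYTES = 256 * 1024  # assumed segment size

class StreamingCache:
    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch   # callable: (title, index) -> bytes
        self.segments = {}                 # (title, index) -> cached segment

    def stream(self, title, n_segments, playback_rate_bps):
        interval = SEGMENT_BYTES * 8 / playback_rate_bps
        for i in range(n_segments):
            key = (title, i)
            if key not in self.segments:               # fill the cache on a miss
                self.segments[key] = self.origin_fetch(title, i)
            yield self.segments[key]
            time.sleep(interval)                       # crude pacing to playback rate
```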

COPYRIGHT ISSUES

Caching of content is one way in which online home entertainment applications can grow from niche applications to a global user base. However, when we consider caching, copyright protection of content becomes a primary issue [8–10]. Home entertainment applications will typically involve a great deal of content, content that costs large amounts of money to produce. Developers who spend this money are going to expect a return on their investment. In current controlled distribution environments (e.g., cinema, television) it is possible to enforce collection of monies from broadcasters, who in turn generate their revenue from users [9].

Some people argue that all content on the Internet should remain free. We observe that this argument is orthogonal to the question of whether we should provide technological means of enforcing copyright; one of the most famous "free" software licenses, the GNU General Public License (GPL), is itself built on the foundations of copyright law. At its core, copyright is a tool for ensuring that content distribution occurs in accordance with the content developer's wishes.

Generating returns on the Internet is a more complex proposition. There is not only the e-commerce issue of collecting payment, but also protection of content against theft. In the purely digital world of the Internet, making a perfect copy of data is not difficult, and once a copy of digital content is made, there is no practical way to stop its subsequent distribution on the Internet. As such, developers of home entertainment applications must consider how to protect the content they deliver against theft, both by determined thieves looking to make a profit and by more innocent end users looking for free subsequent usage of the same content [9, 10].

This issue becomes even more difficult when we consider caching of content. If copyrighted digital content is cached on a myriad of caches throughout the Internet, there are many copies over which the copyright owner's licensing and distribution constraints must be honored [9]. Taking this thought a step further, we have one of two options:
• The proxy/cache must be properly secured and able to deliver protected content.
• Content is distributed in protected form, and the proxy/cache is able to deliver the content whether it is protected or not.

We prefer the second approach. Application developers need to develop copyright protection schemes that function with all proxy/cache implementations. A proxy/cache implementation is guaranteed to provide delivery services for content that matches standard formats (e.g., MPEG); copyright protection must therefore protect the content such that the protected bitstream still appears to be a valid format to existing cache services. In this way, new and improved copyright protection schemes can be implemented within an existing proxy/cache architecture. Distribution of copyright-protected content must also include verification of the right to access that content. We envisage a mixed environment in which protected content is cached and served locally, while authentication and verification take place as a true peer-to-peer exchange with a remote server across the Internet (Fig. 6).
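A minimal sketch of the second option follows, using the Python "cryptography" package's Fernet recipe: the owner encrypts content once, caches store and serve the ciphertext as opaque data, and the client obtains the key only after a rights check. The licence-server step is shown as a hypothetical local function, and the stricter requirement above that the protected bitstream still parse as a valid media format (e.g., MPEG) is not modelled here.

```python
# Protected-content delivery sketch: caches never see the key.
from cryptography.fernet import Fernet

# Content owner: encrypt once before releasing to the distribution network.
key = Fernet.generate_key()
protected = Fernet(key).encrypt(b"...video bitstream...")

# Edge cache: stores and serves the protected object as opaque bytes.
cache = {"movie-42": protected}

def fetch_licence(title):
    # Hypothetical stand-in for authentication against the owner's
    # remote licence server; here it simply returns the key.
    return key

# Client: verifies its rights, obtains the key, and decrypts locally.
plaintext = Fernet(fetch_licence("movie-42")).decrypt(cache["movie-42"])
```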

CONCLUSION

Many people foresee the widespread introduction of broadband access technologies ushering in a new era of online home entertainment applications. However, while current broadband services (e.g., ADSL, cable modems, and 802.11b wireless LANs) are undoubtedly superior to analog modems, broadband home Internet access is not yet able to support an explosion in new entertainment applications.

Beyond the raw bandwidth available on widely used broadband access technologies, it is necessary to consider network usage under systemwide conditions. This means taking into account the interactions between concurrent network applications sharing a broadband access link, an area where little experimental research has been performed. Our preliminary results indicate that concurrent applications sharing a single network link such as ADSL, DOCSIS, or an 802.11b wireless LAN adversely affect each other's performance and reduce the bandwidth available to other applications. Examining the entire network for potential bottlenecks to system performance is a nontrivial task.

One approach to minimizing adverse network conditions is to extend the idea of proxies and caches to new applications whose content is cacheable. We propose that with large enough caches close to the end user, cacheable content can be delivered practically instantaneously with minimal impact on the rest of the Internet. Furthermore, the network core will be freed up to carry more true peer-to-peer traffic (e.g., online gaming) with improved QoS.

A primary issue in delivering content on the Internet is copyright protection.

New Internet applications will utilize increasing amounts of content, and copyright ownership and returns on investment are the stark financial reality of much content development. Content delivery system designers must take copyright and copyright protection into account when developing future networked applications. This presents a major challenge when cached content is also considered, as caching inherently implies making copies of copyright-protected content.

Online digital networked home entertainment represents the future of the Internet. The challenge for systems designers is to develop systems, both networks and applications, that can support the widespread adoption of these applications by the typical home Internet user.

REFERENCES

[1] CableLabs, "Data-Over-Cable Service Interface Specifications: Radio Frequency Interface Specification," SP-RFIv1.1-I01-990311, 1999.
[2] D. Greggains, "ADSL and High Bandwidth over Copper Lines," Int'l. J. Net. Mgmt., vol. 7, no. 5, 1997.
[3] IEEE P802.11, "Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications," Nov. 1997.
[4] T. T. T. Nguyen and G. Armitage, "Experimentally Derived Interactions between TCP Traffic and Service Quality over DOCSIS Cable Links," Global Internet and Next Generation Networks Symp., IEEE GLOBECOM 2004, Nov. 2004.
[5] T. T. T. Nguyen and G. Armitage, "Quantitative Assessment of IP Service Quality in 802.11b and DOCSIS Networks," Australian Telecommun. Networks and Apps. Conf., Sydney, Australia, Dec. 2004.
[6] T. T. T. Nguyen and G. Armitage, "Quantitative Assessment of IP Service Quality in 802.11b Networks," 3rd Wksp. Internet, Telecommun. and Sig. Proc., Adelaide, Australia, Dec. 2004.
[7] Centre for Advanced Internet Architectures (CAIA), Swinburne Univ. of Technology, http://www.caia.swin.edu.au
[8] J. But and G. Egan, "Designing a Scalable Video-on-Demand System," IEEE Int'l. Conf. Commun., Circuits and Systems, Chengdu, China, June 2002.
[9] J. But, "Implementing Encrypted Streaming Video in a Distributed Server Environment," submitted to IEEE Multimedia.
[10] J. Lee et al., "A DRM Framework for Distributing Digital Contents through the Internet," ETRI J., vol. 25, no. 6, Dec., pp. 423–36.


BIOGRAPHIES

JASON BUT ([email protected]) is a research fellow at the Centre for Advanced Internet Architectures (CAIA), Swinburne University of Technology, Melbourne, Australia. He has a Ph.D. in telecommunications engineering from Monash University, Australia. His research interests include perceived performance of networked applications, copyright issues in distributed caching architectures, and network application traffic analysis. He worked as a network engineer for Comalco Smelting in 1996 and as a research fellow for Monash University from 1997 to 2003.

THUY NGUYEN ([email protected]) received a B.Eng. Dip. Prac. in telecommunications engineering from the University of Technology, Sydney, Australia, in 2002. She is now a Ph.D. candidate at CAIA, Swinburne University of Technology. Her research interests include Internet pricing and charging systems, traffic characterization and measurement, QoS, and performance evaluation of broadband and wireless networks.

GRENVILLE ARMITAGE ([email protected]) is an associate professor of telecommunications engineering and director of CAIA at Swinburne University of Technology. His research interests include networked games, IP traffic pattern analysis, broadband IP access architectures, and network security. He has a Ph.D. in electronic engineering from the University of Melbourne, Australia. From 1994 to 1997 he was a research scientist with Bellcore's Applied Research division (now Telcordia Technologies), and from 1997 to 2001 he held research positions with Bell Labs Research, New Jersey, and Bell Labs Research Silicon Valley, California.
