CHALLENGES FOR NETWORK COMPUTER GAMES

Yusuf Pisan
Faculty of Information Technology, University of Technology, Sydney, Australia
[email protected]
http://staff.it.uys.edu.au/~ypisan/

ABSTRACT

Interactive entertainment, and specifically computer games, has grown into a multi-billion-dollar industry in the last few years and continues to grow. This expansion has fueled a number of technologies and led to a wide variety of innovations. Graphics cards have moved from being part of the motherboard to being independent peripherals, often as expensive as the rest of the computer. With graphics now reaching almost lifelike fidelity, the industry is looking at the next set of innovations that will differentiate its products. Part of this innovation will be in the area of artificial intelligence, with increasingly humanlike characters, but a more important component will come through advances in network games. While most games already allow users to play against other players over the internet, and some games are specifically built as multiplayer games, they only use the network in a limited manner. The number of web pages, mailing lists and chat rooms devoted to virtual communities centered on specific games is an indication of the size of these communities. The underlying network technologies often limit what types of games are possible. We examine the challenges faced by current games, describe the workarounds commonly used in the industry, and look at the challenges that game developers, users, internet service providers, internet architects and protocol developers face in supporting this expanding and innovative industry.

KEYWORDS

Network computer games, internet applications, virtual communities

1. INTRODUCTION

Interactive entertainment, and specifically computer games, has grown into a multi-billion-dollar industry in the last few years and continues to grow. This expansion has fueled a number of technologies and led to a wide variety of innovations. Graphics cards have moved from being part of the motherboard to being independent peripherals, often as expensive as the rest of the computer. With graphics now reaching almost lifelike fidelity, the industry is looking at the next set of innovations that will differentiate its products. Part of this innovation will be in the area of artificial intelligence, with increasingly humanlike characters, but a more important component will come through advances in network games. While most games already allow users to play against other players over the internet, and some games are specifically built as multiplayer games, they only use the network in a limited manner. The number of web pages, mailing lists and chat rooms devoted to virtual communities centered on specific games is an indication of the size of these communities. The underlying network technologies often limit what types of games are possible. We examine the challenges faced by current games, describe the workarounds commonly used in the industry, and look at the challenges that game developers, users, internet service providers, internet architects and protocol developers face in supporting this expanding and innovative industry.

Computer games combine cutting-edge technologies from a number of fields, ranging from databases to artificial intelligence to software engineering to graphics. As expected, the technologies most visible to the user are the ones to be exploited first. To that end, computer graphics was the first to benefit, with networks and artificial intelligence being the next best candidates. While advances in artificial intelligence will continue, it is now widely accepted that while AI problems may look simple on the surface, as they did to many in the 1970s, the solutions are difficult to come by and are often constrained to specific domains or situations. The appearance of artificial intelligence, achieved with careful scripting and social engineering, is much easier to implement and often leads to better results than having a full AI engine in the background.

It is useful to look at how computer graphics have changed in the last decade to anticipate the impact interactive entertainment will have once the focus of the industry shifts to network games. In 1992, Microsoft was shipping the revolutionary Windows 3.1, which could display VGA graphics. In 1996, the first 3D hardware-accelerated graphics card, the nVidia NV1, was released at the same time as Quake. While the first graphics cards did not improve performance, they led the way to the 3dfx Voodoo 3D accelerator released in 1997, followed by the GeForce 256 from nVidia in 1999. Current nVidia Quadro FX cards come with 256MB of memory, have a memory bandwidth of 32GB per second, can render 133 million triangles per second and have a fill rate of 4.5 billion texels per second, making it possible to render film-quality animation on personal computers.

The advances in computer networks, while respectable in their own right, look slow when compared to advances in graphics cards. 56Kbps modems first became widely available in the 1990s. Based on surveys by Nielsen/NetRatings, while 75% of Americans had internet access in 2003, 59% of these users connected to the internet at 56Kbps or less. While the US is fifth among OECD countries in terms of broadband penetration (Korea, Canada, Belgium, Denmark and Sweden having higher subscriber connectivity), it has a larger impact on network architecture and a more prominent role in the development of computer games and interactive entertainment technologies. Broadband penetration in the US is expected to reach as high as 70% of internet users by the end of 2005, which will make high-speed network connections the default rather than the exception.

In Section 2, we examine the challenges that game developers face today in terms of computer networks and provide some examples of how the industry has addressed, or more often worked around, these problems. Section 3 analyses the network usage of two different games. Section 4 describes some of our predictions and conclusions for advances in network technologies.

2. CURRENT PROBLEMS AND WORKAROUNDS

According to GameSpy, the multi-player online role-playing game Lineage had over 2 million subscribers at the end of 2002. Also in 2002, Everquest had 450,000 subscribers and Asheron's Call had 250,000. The numbers of concurrent online gamers (based on figures for a Friday in January 2004) exceed 91,000 for CounterStrike, 9,000 for Call of Duty and 8,000 for Battlefield 1942. These numbers clearly show that the amount of interest in both multi-player online role-playing games and action-based computer games is significant. While the popularity of individual games varies from year to year, the overall trend in the number of players continues to increase.

Regardless of the type of game, developers have to deal with issues of latency, bandwidth, time synchronization, protocol overheads and security when designing games. For example, the network code for games often has to be much more tamper-proof than the operating system the game is running on, both to prevent cheating and to prevent users from taking over another player's computer. The computer game industry is a relatively young industry with very little long-term research and development investment. The development cycle for most games is two to three years, which leads to solutions found as much by trial and error as by adapting principled techniques from the research literature.

2.1 Latency

Latency is the amount of time it takes for a packet of information to travel from one computer to another. While latency is not critical for turn-based games, action-based games that require accurate shooting suffer under high-latency conditions. Latency is a function of the distance between the computers, the hardware redirecting the packets, the type of lines that connect the two computers, and the network load, which may cause packets to be dropped. A small amount of latency cannot be avoided: fiber-optic lines carry signals at about 66% of the speed of light in a vacuum, which is approximately 198km per millisecond. A server on the opposite end of the world, 20,000km away, would therefore have a minimum latency of 101 milliseconds even if the two computers were linked by a direct fiber line. Of course, choosing a closer server improves latency, while the player's modem, routers, physical lines and network conditions can still cause high latency. For a playable game, the expected latency is less than 100 milliseconds.

When manufacturers of network hardware describe the speed of their products, they are in fact talking about capacity rather than speed. A 56Kbps modem can send at most 56Kbits each second; this is the carrying capacity of the connection. The time spent sending a small packet and receiving an acknowledgement is affected more by latency than by the available bandwidth. Even when transferring a large file, a number of control messages have to be sent and acknowledged; this overhead is independent of the file size. While higher bandwidth can be achieved by bundling a number of 56Kbps lines, the latency remains the same. Analog modems often suffer the worst latency problems, but recent cable and ADSL modems have much lower latency. Latency information for most products is not available.

The effects of latency, even more than bandwidth, have shaped the games industry. First, the game servers for action games, such as Unreal Tournament, are freely available and internet service providers are encouraged to install them on their local networks. This leads to many servers being available and gives users a choice of which one to connect to. For ISPs, the availability of game servers is part of their advertising, and since ISPs often do not have to pay for their internal network traffic, the cost is minimal. The game developers are freed from supporting servers for users, but at the same time they cannot charge subscription fees, lose control over updates to servers, and consequently have little interest in supporting and improving the server software beyond the game's initial release. Second, game developers have devised a number of techniques, such as dead reckoning (also known as predictive contracts), to decrease the visible effects of latency. Third, the virtual communities that form around games and game servers reflect the geographical distribution (or rather the topological distribution based on network connectivity) of the user community.

Dead reckoning only partially reduces the effects of latency, and game developers continue to look for further techniques. Dead reckoning works by the server exposing the algorithm for non-player characters (NPCs) to the client. The client can then continue to simulate the NPC based on the agreed algorithm, requiring minimal corrections from the server. The distance between the client-simulated NPC and the server-modeled NPC is checked, and if it exceeds a certain threshold, state information is sent from the server to the client for correction. The simulation of NPCs requires additional CPU and memory on the client machine, which may have adverse effects on performance depending on client load. Dead reckoning has also been used to model human players, but with limited success. The simplest form is for the client to assume that a player continues to move with the direction and speed at which it was last observed until new state information is available. However, adventure games that involve shooting also tend to favor jerky, unpredictable movements: peeking momentarily out from behind an obstacle for an observation and weaving to avoid bullets rather than moving in a straight line are just two examples of such unpredictable movements.
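A minimal sketch of the dead-reckoning loop described above, assuming constant-velocity extrapolation and a fixed error threshold; the class, field and threshold names are hypothetical and not taken from any particular engine.

```python
import math

class DeadReckonedEntity:
    """Client-side proxy for a server-owned object (e.g. an NPC).

    Assumes constant-velocity extrapolation; real engines may add
    acceleration terms or game-specific movement scripts.
    """

    ERROR_THRESHOLD = 0.5  # world units of drift before a correction is applied

    def __init__(self, position, velocity):
        self.position = list(position)   # [x, y]
        self.velocity = list(velocity)   # [vx, vy] agreed with the server

    def extrapolate(self, dt):
        # Advance the local simulation without waiting for the server.
        self.position[0] += self.velocity[0] * dt
        self.position[1] += self.velocity[1] * dt

    def apply_server_state(self, server_pos, server_vel):
        # Only snap to the authoritative state if the drift is noticeable;
        # small errors are tolerated to keep motion smooth.
        error = math.dist(self.position, server_pos)
        if error > self.ERROR_THRESHOLD:
            self.position = list(server_pos)
        self.velocity = list(server_vel)
```

In practice the correction is usually blended over several frames rather than snapped instantly, to avoid visible popping.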

2.2 Bandwidth Unlike latency, bandwidth problems are much better understood and easier to address. Compressing data and only sending changes rather than a full state update are some of the common methods used. For end users, bandwidth problems can be remedied by moving to a higher capacity line if the bottleneck is due to the players connection speed or moving to a different internet service provider if the congestion is due to the service provider’s connection. Server operators can similarly update their own connection to the network, often connecting to multiple independent networks to provide fast access to users. There are a number of new techniques that are gaining in popularity worth mentioning. The peer-to-peer file sharing model has been very successful despite efforts from music and other industries to shut it down. The same principle can be applied to multiplayer online games where users download information from each other rather than directly from the server. With the server not being an intermediary this option opens up potential vulnerabilities both in terms of cheating and viruses among players, but can be prevented using strong cryptography. It is also possible to move some of the computation from the server to the clients in the same manner reducing the load on the server. Server based games often require some level of multiplexing where one-to-many communication from the server to all the clients is required. Currently this is achieved by the server maintaining a separate communication line with each of the clients. A configurable network with routers duplicating messages as

591

IADIS International Conference WWW/Internet 2004

required and clients subscribing to the information stream produced by the server would reduce both the load and the required bandwidth for the server. In Microsoft Windows, as well as many other operating systems, the defaults for network communication are embedded deep down in the registry or a configuration file and cannot be adjusted dynamically. The many tweaks that can be found on the web range from urban legends to accurate but very situation specific information. The computer often has to be rebooted to apply these changes which make applying and testing such changes tedious. For example, when using TCP protocol, the receiver must acknowledge the successful receipt by sending an acknowledgement to the sender. If the sender does not receive the acknowledgement, packet is assumed to be lost and retransmitted. For the receiver, acknowledging each individual packet as soon as it arrived would lead to bad performance, so the sender keeps transmitting up to a maximum window determined by the receiver without expecting acknowledgement. The default size for Windows is 8Kbytes is adequate for slow modems or in cases of low latency. On a high-capacity connection downloading a large file at 100 kilobytes per second would fill the receive buffer under 80 milliseconds, often less that the round-trip latency on he Internet, requiring the sender to stop sending until an acknowledgement is received. Adaptive configuration of the “window” can greatly increase transmission rates. Microsoft XBOX recently introduced online gaming and voice communication to their console. While teleconferencing style communications have been available to desktop games for some time, most desktop games have limited themselves to text based chat. This move represents the increasing network demands by network games as well as the expectation that broadband will be available in most homes.
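To make the window arithmetic above concrete, the short sketch below computes how long an 8 Kbyte receive window lasts at the download rate quoted in the text, and what window size would be needed to keep the connection busy; the 150 ms round-trip time is an illustrative assumption, not a figure from the paper.

```python
def window_drain_time_ms(window_bytes, rate_bytes_per_s):
    """How long the sender can transmit before the receive window is full."""
    return 1000.0 * window_bytes / rate_bytes_per_s

def required_window_bytes(rate_bytes_per_s, rtt_ms):
    """Bandwidth-delay product: the window needed to avoid stalling."""
    return rate_bytes_per_s * (rtt_ms / 1000.0)

# Figures from the text: an 8 Kbyte default window and a 100 Kbyte/s download.
print(window_drain_time_ms(8 * 1024, 100 * 1024))     # ~80 ms
# With an assumed 150 ms round trip, the window should be at least ~15 Kbytes.
print(required_window_bytes(100 * 1024, 150) / 1024)  # ~15 Kbytes
```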

2.3 Time Synchronization

For games that use dead reckoning, it is necessary to synchronize the clocks on all machines to avoid unnecessary synchronization of objects. NTP (Mills, 1992) suffers from being overly complex and SNTP (Mills, 1996) from not being sufficiently accurate. In addition, both rely on UDP communication, which is often blocked by firewalls. The method proposed by Simpson (2000) uses TCP for clock synchronization, but even this method suffers in high-latency conditions or when the network is highly congested, due to TCP packets being retransmitted. The algorithm was successfully tested on NetStorm: Islands At War, a real-time strategy game. As this example shows, even when protocols and mechanisms exist, practical realities often force game developers to devise alternative methods.
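The sketch below illustrates the basic request/response offset estimate that TCP-based approaches build on; it is a simplified illustration rather than Simpson's actual algorithm, and the eight-byte timestamp message format is invented for the example.

```python
import socket
import struct
import time

def recv_exact(sock, n):
    """Read exactly n bytes from a TCP socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf += chunk
    return buf

def estimate_offset(host, port, samples=8):
    """Estimate the server clock offset over TCP.

    Sends a local timestamp, receives the server's timestamp, and assumes
    the one-way delay is half the measured round trip.  Keeping the sample
    with the smallest round trip discards values inflated by retransmission.
    """
    best_rtt, best_offset = None, 0.0
    with socket.create_connection((host, port)) as sock:
        for _ in range(samples):
            t_send = time.time()
            sock.sendall(struct.pack("!d", t_send))
            server_time = struct.unpack("!d", recv_exact(sock, 8))[0]
            t_recv = time.time()
            rtt = t_recv - t_send
            offset = server_time - (t_send + rtt / 2.0)
            if best_rtt is None or rtt < best_rtt:
                best_rtt, best_offset = rtt, offset
    return best_offset  # add this to local time to approximate server time
```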

2.4 Protocols

Most games are built around UDP, the best-effort communication protocol. Compared to TCP, UDP suffers less from latency and is appropriate in situations where lost packets are not critical. UDP is a connectionless protocol, so unlike TCP there is no guarantee of data delivery. The most common use of UDP in games is to inform each client of the positions of the other players; because of the high number of updates, a missed update is not critical.

Both UDP and TCP are affected by the NAT (network address translation) that routers perform. Because the client computer is not directly on the internet, the router (or the ADSL modem/router combination) needs to make the appropriate translations for outgoing and incoming services. NAT can deal with TCP more easily, since a connection between the client and the server via the router is established. For UDP, port forwarding on the expected ports needs to be configured to enable the router to redirect incoming UDP requests. Furthermore, newer modems and routers now incorporate a DMZ (demilitarized zone), where all unknown traffic can be directed to a specific computer, allowing the user to maintain a firewall but still play computer games that need direct access to the network.

The protocols used in network play have to be constructed with latency in mind from the start. For example, the popular POP3 protocol for retrieving mail from a server requests each piece of mail individually by issuing a command similar to "retr 72". To retrieve 100 messages from the server, this command has to be issued 100 times and acknowledged 100 times, resulting in a large overhead due to latency. Since the overhead remains the same regardless of message size, it is much more pronounced for smaller messages. A similar situation occurs in games when a simplistic algorithm asks for an update on each individual object: the update information is small, or possibly nil, but the overhead of asking dominates the interaction.
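A back-of-the-envelope sketch of why per-object requests are so costly compared with one batched request; the 50 ms round trip, 32-byte updates and 56Kbps line rate are illustrative assumptions, not measurements from any particular game or the POP3 example above.

```python
def naive_total_ms(n_objects, rtt_ms, per_object_bytes, rate_bytes_per_ms):
    """One request/acknowledgement round trip per object (POP3-style)."""
    transfer = n_objects * per_object_bytes / rate_bytes_per_ms
    return n_objects * rtt_ms + transfer

def batched_total_ms(n_objects, rtt_ms, per_object_bytes, rate_bytes_per_ms):
    """A single request that returns every changed object in one reply."""
    transfer = n_objects * per_object_bytes / rate_bytes_per_ms
    return rtt_ms + transfer

# Hypothetical numbers: 100 objects, 50 ms RTT, 32-byte updates, 7 bytes/ms (~56Kbps).
print(naive_total_ms(100, 50, 32, 7))    # ~5457 ms -- latency dominates
print(batched_total_ms(100, 50, 32, 7))  # ~507 ms
```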

2.5 Security

Network computer games pose a special security threat since they have to expose the player's machine to other machines, often over the internet. Some of the security flaws are inherent in the operating systems and are beyond game developers' control; others require a combination of cryptographic techniques. Early network computer games were built on the principle of security through obscurity, where developers minimized their exposure by keeping their network information hidden. Popular games were quickly reverse-engineered, and software that would display all the network information, rather than having it filtered by the game, was quickly constructed. With the new generation of games, network security is gaining additional importance: it is no longer just the movement of other players that is reported to the client, but pieces of code or pseudo-code that the client executes. While the game server software is often assumed to be trusted, this assumption is also weakened by the large number of available servers with varying degrees of security and control.

Although it is (relatively) much easier to establish security in server-based games, the increased number of users puts increased load on the servers. For users in remote locations, where nearby servers with low latency are not available, peer-to-peer games are much more attractive. Establishing security in a peer-to-peer game where one party acts as the server is much more difficult. Cryptographic algorithms, such as PGP, are of limited use since, in addition to the privacy and authenticity of the information transmitted, we need to force both parties to transmit the game information fully without adding or excluding any information that may benefit them. Assumptions about packet loss or latency due to connection problems also have to be re-examined under this scenario.
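As one illustration of the kind of cryptographic check involved, the sketch below appends an HMAC to each game message under a shared session key. This is only an assumed, generic scheme: it establishes authenticity and guards against replay, but it does not by itself stop a peer from withholding or delaying information, which is the harder problem described above.

```python
import hashlib
import hmac
import os

SESSION_KEY = os.urandom(32)  # in practice negotiated when the connection is set up

def sign_message(payload: bytes, sequence: int) -> bytes:
    # The sequence number is included so a captured packet cannot be replayed.
    header = sequence.to_bytes(4, "big")
    tag = hmac.new(SESSION_KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def verify_message(packet: bytes):
    # Split off the 4-byte sequence header and the 32-byte SHA-256 tag.
    header, payload, tag = packet[:4], packet[4:-32], packet[-32:]
    expected = hmac.new(SESSION_KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed authentication")
    return int.from_bytes(header, "big"), payload
```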

2.6 Browser Games and Mobile Games

Browser and mobile games represent different types of gaming experience. Browser games, using ActiveX, Java, Flash and similar technologies, are intended to be shorter, and there is often no charge or registration required to play them. For now, browser games are not seen as commercial products. Platform-specific executables that can be downloaded allow much greater control over the user interface and more scope for expansion. Most mobile phones now incorporate a number of Java-based games. These games often have no learning curve and are intended for short-term use. Currently, they are simple adaptations from the desktop with very little addition. Although there is some work on taking advantage of GPS and mobility for these games, this area is still very new.

3. CASE STUDY

We examine two games below and look at how they address different network issues.

3.1 Unreal Tournament

Unreal Tournament (UT) is a popular first-person shooter game that can be played in single-user mode but is most often played as a multi-player game. The basic premise is that the player has to shoot and kill other players, taking advantage of different weapons, health packs, adrenaline pills for fast actions, and so on. While the gameplay of UT is nothing out of the ordinary, the capabilities of its graphics engine have made UT the current leader in the field. In addition, UT allows users to extensively customize the environment, to the extent that UT is used at the Institute for Creative Technologies, University of Southern California, to simulate virtual actors in leadership training scenarios (Gordon, 2004).

UT can be grouped with Quake, Doom and Ultima Online in terms of its client-server network architecture (Sweeney, 1999). The server communicates the game state to all the clients, the clients render the view of the world, and the server performs a "tick" to update the game state. The "tick" operation updates all characters based on physics, initiates any game events and executes Unreal scripts. Unlike Doom and Quake, UT can handle ticks of varying length to enable variable frame rates. The game state maintained by the server is taken as the gold standard, and all objects manipulated by clients are approximate representations. Transmitting the full game state to each client at every tick is not possible due to limited bandwidth. Instead, UT uses a set of heuristics to determine which objects are relevant to a specific client: the character controlled by the player is highly relevant, whereas an actor last seen more than 11 seconds ago is no longer relevant. Even with relevance filtering, the amount of information that needs to be transmitted to the client remains too high, so UT uses priorities to determine which actors get more bandwidth. An actor with a priority of 2.0 gets twice as many updates as an actor with a priority of 1.0. For example, bots (NPCs that attack the user, often controlled by AI) have a priority of 8, while decorative creatures have a priority of 2.

Lag is a common problem in UT games: pressing the forward arrow key should result in the player moving forward now, not 300 milliseconds later. UT uses a prediction scheme. When the user executes a move on the client, the actor is moved immediately and the information is sent to the server with a time stamp. The server carries out the exact same movement, incorporating the time elapsed (deltaTime) for the information to reach the server. The server then sends the client a ClientAdjustPosition message. The client receives the ClientAdjustPosition message, discards all movement information from before its time stamp, and re-applies any movement that happened after that time to arrive at the current position. At any given time, the client is predicting ahead of what the server has told it by an amount of time equal to half its lag time; the client's local movements are not lagged at all.
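A minimal sketch of the saved-move replay step described above, assuming hypothetical Move fields and class names; it shows the general pattern of client-side prediction, not Unreal's actual code.

```python
from dataclasses import dataclass

@dataclass
class Move:
    timestamp: float  # client time when the input was sampled
    dx: float         # movement applied locally for this input
    dy: float

class PredictingClient:
    """Applies inputs immediately and keeps them until the server confirms them."""

    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y
        self.pending = []  # moves sent to the server but not yet acknowledged

    def apply_input(self, move: Move):
        # Local prediction: the player sees the result of the key press now.
        self.x += move.dx
        self.y += move.dy
        self.pending.append(move)
        # ...in a real client the move would also be sent to the server here.

    def on_client_adjust_position(self, server_x, server_y, ack_timestamp):
        # Discard every move the server has already accounted for...
        self.pending = [m for m in self.pending if m.timestamp > ack_timestamp]
        # ...start from the authoritative position and replay the rest.
        self.x, self.y = server_x, server_y
        for m in self.pending:
            self.x += m.dx
            self.y += m.dy
```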

3.2 Age of Empires

Age of Empires is a classic real-time strategy game. The popularity of the initial game has led to Age of Kings and Age of Mythology, and it is closely related in style to the Civilization series. Unlike first-person shooter games, the goals of such strategy games are much more modest in terms of network play. In 1996, when Age of Empires was being built, the goal was to support 8 players at 15 frames per second over a 28.8Kbps network connection (Bettner and Terrano, 2001).

One of the challenges in Age of Empires was that the user's settings determined how long each turn took, based on whether the user was watching the units or looking at open terrain. Updating just the positions of each unit would exceed the available bandwidth at 250 units. The solution was to have the same simulation running on each client machine and have the clients issue commands simultaneously, which would be transmitted to the others. To prevent start-stop style gameplay, communication messages were interleaved with animations, resulting in continuous play. UDP was adopted as the communication protocol between clients, but unlike the movement information in Unreal Tournament, the clients cannot afford to lose packets related to unit movements, so a client has to ask for missing packets after a certain period of time. One of the advantages of all clients acting simultaneously was that hacking the game was extremely difficult.

For Age of Empires, latency was only noticeable if it exceeded 500 milliseconds. Users on average issued a command every second, going up to four commands a second in the heat of battle. This level of user interaction is significantly less than in UT, where the user continuously moves the actor.
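The sketch below shows the core of this lockstep idea under simple assumptions: every client runs the same deterministic simulation, and a command issued on turn t is scheduled to execute on turn t plus a small delay on all machines. The class name and the two-turn delay are illustrative assumptions, not taken from the game's source.

```python
from collections import defaultdict

class LockstepSimulation:
    """All clients execute identical command lists on identical turn numbers,
    so only the commands (not unit positions) have to be sent over the network.
    """

    COMMAND_DELAY = 2  # turns of slack to hide network latency

    def __init__(self):
        self.turn = 0
        self.scheduled = defaultdict(list)  # turn -> commands from all players

    def issue_command(self, command):
        # Commands take effect a few turns in the future so every client
        # has time to receive them before that turn is simulated.
        target_turn = self.turn + self.COMMAND_DELAY
        self.scheduled[target_turn].append(command)
        return target_turn  # the turn this command must be broadcast for

    def advance_turn(self, execute):
        # A turn may only be simulated once all players' commands for it
        # have arrived; otherwise the simulations would diverge.
        for command in self.scheduled.pop(self.turn, []):
            execute(command)
        self.turn += 1
```

Because every client executes the same commands on the same turn, sending only the commands keeps the bandwidth requirement nearly independent of the number of units.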

4. CONCLUSION

The challenges that face game developers go beyond traditional network problems. Even when known solutions to a problem exist, incorporating them may not be feasible. The assumptions behind general network usage differ significantly from those of network computer games, where latency continues to be a problem. A careful examination of different types of games and how they use the network is required when developing new protocols. The successful games described above started with careful monitoring and understanding of human behavior and gameplay. As computer games continue to grow, the next innovations will come from network technologies.

REFERENCES

Bettner, P. and Terrano, M., 2001. 1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond. Gamasutra, downloaded from http://www.gamasutra.com/features/20010322/terrano_01.htm on 20 April 2004.
Gordon, A., 2004. Authoring branching storylines for training applications. Proceedings of the Sixth International Conference on Learning Sciences (ICALT), Santa Monica, CA, USA.
Nielsen/NetRatings, downloaded from http://netratings.com/news.jsp on 20 April 2004.
Computer & Video Games Survey, downloaded from http://www.video-games-survey.com/online_gamers.htm on 20 April 2004.
Mills, D., 1992. Network Time Protocol (Version 3) Specification, Implementation and Analysis. University of Delaware, March 1992. RFC-1305.
Mills, D., 1996. Simple Network Time Protocol (Version 4). University of Delaware, October 1996. RFC-2030.
Simpson, Z. B., 2000. A Stream-based Time Synchronization Technique for Networked Computer Games, downloaded from http://www.mine-control.com/zack/timesync/timesync.html on 20 April 2004.
Sweeney, T., 1999. Unreal Networking Architecture, downloaded from http://unreal.epicgames.com/Network.htm on 20 April 2004.
