Abnormal Web Usage Control by Proxy Strategies

Hsiang-Fu Yu, Li-Ming Tseng
E-mail: [email protected], [email protected]
Distributed System Lab., Department of Computer Science and Information Engineering, National Central University, Chung-Li, 32054, Taiwan

Keywords: WWW traffic monitoring, access control, proxy server, network management

Abstract

The World Wide Web has become an extremely popular information service, and its heavy HTTP traffic causes network congestion. Proxy cache servers are widely deployed on the Internet to overcome this obstacle. However, the approach yields an undesirable phenomenon: a small set of users misuses proxy servers to mirror the entire contents of Web sites. This behavior wastes network resources, increases WWW servers' loads, increases other users' waiting time, and violates copyrights. Approaches to designing a proxy server with WWW usage control and to making the proxy server effective on local area networks are proposed to prevent such abnormal WWW access and to prioritize WWW usage. Finally, we implement a system, ProxyBreaker, to demonstrate the approaches. The implementation reveals that the proposed approaches are effective: the abnormal Web access has not recurred.

Introduction

The WWW has become the most popular offering on the Internet because of its ease of use and universal access. In fact, in a study of backbone usage (Thompson, 1997), HTTP (Berners-Lee, 1996; Fielding, 1999) packets comprised two-thirds of all traffic, causing serious network congestion. Proxy cache servers are widely utilized to overcome this obstacle. A proxy server has several significant advantages. First, it reduces waiting time, because remote objects are cached in the local proxy server. Second, it uses bandwidth more efficiently, because cached objects can be reused repeatedly. Third, it shares the load of popular WWW sites, since it acts as a partial replication site for the WWW servers. These advantages, together with the lack of WAN bandwidth, have encouraged users of the Taiwan Academic Network (TANet) to use proxy servers. In fact, almost all of TANet's overseas HTTP traffic passes through proxy cache servers.

A particular event motivated the construction of a proxy server that controls abnormal Web usage. In August 1998, a librarian at the National Central University (NCU) received a complaint from the Association of American Physics, stating that a user at NCU had downloaded many papers (roughly 900 Mbytes) from their Web site via an NCU proxy server in a single night. Tracing the access log showed that an impatient graduate student had used Teleport (Tennyson, 2001) to download the entire WWW site. Similar cases also occurred at other universities in Taiwan. In fact, in December 1999, another national university was banned from accessing the digital library operated by a well-known electronic publisher. Such inappropriate mirroring has several negative impacts on the Internet:

- The load on WWW servers is increased.
- A few users consume most of the WAN bandwidth, increasing the remaining users' waiting time.
- Service providers probably lose customers owing to network congestion or server overload.
- Mirroring may violate copyright.
- Even worse, if a WWW server rejects access from proxy servers to prevent mirroring, then all of those proxies' users are denied.

Normally, when users do not use a proxy server, WWW servers can recognize abnormal users and directly forbid their access. Unfortunately, when HTTP requests are served via a proxy server, the WWW servers can recognize only that server, and not the real users behind the proxy. Restated, proxy servers hide abnormal access. Approaches to designing a proxy server with WWW usage control and to making the proxy server effective on local area networks are proposed to prevent such abnormal WWW access and to prioritize WWW usage. Finally, a system, ProxyBreaker, was implemented to illustrate the approaches.

The rest of this paper is organized as follows. Section 2 addresses the design issues and deployment of the system. Section 3 elucidates ProxyBreaker's implementation. Section 4 briefly describes related work. Finally, we summarize conclusions.

Web traffic control by a proxy

This section discusses the design issues and deployment of a proxy server with traffic control.

Proxy design with usage control

Many proxy servers (Cacheflow, 2001; Cisco, 2000; Squid, 2001) are not equipped with Web usage control functions. The addition of such functions is discussed here.

Source code modification versus external functional module

New functions can be added to a proxy server in two ways: modifying the source code or adding external modules. The former performs better and satisfies most requirements. However, the source code may not be available. Moreover, programmers must take time to read the code; the time required is particularly long if the code was written in a language unfamiliar to them. The final restriction is conformance: the code must be updated whenever the original proxy software issues a new version, and maintaining such code is time-consuming. Adding external modules also strengthens the proxy, and this method has several advantages. First, no source code is needed; without public source code, adding external modules is the only way to improve the original system. Second, the module can be coded in any appropriate language, regardless of the language originally used to program the server. Third, the method provides better conformance, because the proxy server is treated as a black box whose input and output rarely change. Accordingly, an external module can easily be integrated with the proxy server, and updating the module is unnecessary when a newer version of the proxy is issued. However, the main drawback of adding external modules is that the system's performance and functionality are restricted by the external integration of such modules.

Figure 1 Approaches to integrating an external module with a proxy server: (a) URL hijacking; (b) URL redirection; (c) page redirection; (d) log analysis

Issues regarding the external functional module

This subsection introduces four approaches to integrating an external module with a proxy server.

- Hijack URLs. Figure 1(a) depicts the complete procedure. The external module is placed in front of the proxy server to hijack the input URL. The hijacked URL is processed and redirected to the proxy server. Then, the server fetches the Web page and returns it to the external module. Finally, the external module forwards the page to the client. In this approach, the module checks whether the input URL should be forwarded to the proxy. Additionally, the module examines the page returned from the proxy to filter out inappropriate content, such as adult or violent material, or replaces the page with a warning if a user abuses network resources.

- Redirect URLs. Many proxies (Cacheflow, 2001; Cisco, 2000; Wessels, 2001) offer a redirection interface for plugging in an external program that processes input URLs, as depicted in Figure 1(b). The module determines whether the input URL should be replaced with a new URL, using an access control list. If so, the HTTP connection is directed to the new URL rather than the original one. The difference between URL hijacking and URL redirection is that the former intercepts requests outside the proxy server, so the proxy does not recognize that its requests are hijacked, whereas in the latter the proxy actively redirects requests to the module. URL hijacking can check a page's contents but URL redirection cannot. (A minimal redirector sketch appears after this list.)

- Redirect pages. Some proxy servers (Danzig, 1998) allow page redirection. They redirect original pages to an external program; the program then returns new or original pages to the proxy servers, which transmit the pages to users. Figure 1(c) shows the complete process. The difference between page redirection and URL redirection is that the former checks a page's contents and, if necessary, replaces the page, whereas the latter merely examines the page's URL.

- Analyze log. The external module analyzes access logs generated by proxy servers, and then offers feedback that alters the proxy server's status to control Web usage, as depicted in Figure 1(d). Furthermore, this approach can be combined with the above approaches: a simple combination adds the log-analyzing function to the external module of the previous approaches, and the module then changes the proxy server's operation according to the results of the log analysis.
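To make the URL-redirection approach concrete, the following is a minimal sketch of the kind of external redirector such proxies invoke. It assumes the common line-oriented redirector protocol, in which the proxy writes 'URL client-address/fqdn user method' to the program's standard input and reads back either a replacement URL or a blank line; the blocked list and warning URL are invented for illustration, and this is not ProxyBreaker's code.

    import sys

    # Hypothetical access control list: requests to these servers are
    # replaced with a warning page. Entries are invented for illustration.
    BLOCKED = {"journals.example.org"}
    WARNING_URL = "http://proxy.example.edu/warning.html"  # assumed URL

    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        url = parts[0]
        # Extract the host from "http://host/path"; fall back to the raw URL.
        host = url.split("/")[2] if "//" in url else url
        # Emit a replacement URL to redirect, or a blank line to pass through.
        if host in BLOCKED:
            sys.stdout.write(WARNING_URL + "\n")
        else:
            sys.stdout.write("\n")
        sys.stdout.flush()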

Additional issues

This subsection addresses other important requirements of a proxy with usage control.

Real-time control

Timely detection of abnormal usage and a quick response are essential. However, this requirement cannot easily be met, since a proxy server must process massive numbers of requests. Suppose the log analysis method is adopted to activate the monitoring system. A straightforward way to meet the real-time requirement is to analyze the access log immediately after the proxy receives a new request; however, this approach leads to heavy disk I/O and serious performance degradation. Alternatively, analyzing the log in batches reduces the negative impact of the monitoring overhead on the proxy server. For example, suppose that a proxy server can download Web pages at 100 Kbytes/sec and that the maximum allowed Web usage is 100 Mbytes/hour. Let the overuse margin be 1 Mbyte. The system is guaranteed to detect the overuse before the user's usage exceeds 101 (100 + the overuse margin) Mbytes per hour, provided it measures the usage every 10 (1000/100) seconds. That is, according to the overuse margin, the system determines an appropriate period for analyzing the log, rather than monitoring incoming requests whenever they are received.
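A minimal sketch of this interval computation, using the numbers from the example above; the function name and the lower bound are illustrative assumptions:

    def analysis_interval(margin_kbytes, rate_kbytes_per_sec, floor_sec=5):
        # The monitor only needs to run often enough that a user cannot
        # exceed the limit by more than the margin between two analyses.
        if rate_kbytes_per_sec <= 0:
            return floor_sec
        return max(floor_sec, margin_kbytes / rate_kbytes_per_sec)

    # Example from the text: a 1 Mbyte margin at 100 Kbytes/sec -> 10 seconds.
    print(analysis_interval(1000, 100))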

Self-adaptation

The system must adapt to changes in the network environment to further reduce the monitoring overhead. For example, traffic on TANet starts increasing in the morning and reaches a peak in the afternoon; it then declines and reaches a minimum at about 5:00 a.m. In this case, a self-adaptive system should automatically increase its monitoring interval at, say, 10:00 a.m., when the network is busier and the transmission speed is lower than at other times. This adaptation prevents the monitoring overhead from seriously degrading the proxy's performance. Conversely, the system must adopt a shorter monitoring period when the network is faster (e.g. at 5:00 a.m.) to ensure that abnormal access is not missed.
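Continuing the sketch, a self-adaptive monitor can simply recompute the interval from the measured transmission rate on every cycle. The rate probe and log-analysis step below are placeholders, not ProxyBreaker's actual routines:

    import time

    def measured_rate_kbytes_per_sec():
        """Placeholder: would sample the proxy's current average rate."""
        return 100.0

    def analyze_access_log():
        """Placeholder for the batch log analysis described above."""
        pass

    def monitor_loop(margin_kbytes=1000, floor_sec=5):
        while True:
            analyze_access_log()
            rate = measured_rate_kbytes_per_sec()
            # A faster network shortens the interval; a congested one
            # lengthens it, limiting the monitoring overhead.
            time.sleep(max(floor_sec, margin_kbytes / max(rate, 0.001)))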

Prioritized service

Prioritized service requires a monitoring system that offers flexible management policies for different Web servers and clients. For example, electronic publication sites normally dislike readers downloading large amounts of data in a short period; hence, they probably require that usage not exceed an upper bound, such as 50 Mbytes per 24 hours. In contrast, commercial Web servers seek access by as many users as possible, because their advertising revenue is proportional to the number of accesses, so usage by their clients must not be restricted. Similarly, clients have different requirements: a graduate student, for example, probably needs a larger usage allowance than an undergraduate student. Client-server or server-client usage control is sometimes required. For example, a company may require that its research developers' use of an electronic publication site be uncontrolled, but that their access to commercial Web servers be limited.

Proxy deployment

The strategy offered earlier primarily employs proxy servers to control HTTP traffic. However, a proxy server does not act as a router or gateway, and HTTP packets do not always pass through it, reducing the strategy's effectiveness. Accordingly, two approaches to capturing as many Web requests as possible are proposed.

Transparent proxy

A transparent proxy (Cohen, 1999; Danzig, 1998; Heddaya, 1998; Krishnan, 1999; Legedza, 1998; Rodriguez, 2000; Williams, 1998) does not require users to configure their browsers for use with proxy servers. That is, all requests are automatically redirected to the proxy servers, regardless of the browser setting. Two mechanisms activate a transparent proxy.

- Redirected packets. HTTP requests are redirected to a proxy server by either a layer-4 switch or a policy-based router. If a switch is used, it diverts packets destined for port 80 to a proxy server. Conventionally, for fault tolerance, this proxy server is a cluster consisting of several proxy servers. These servers are directly connected to the switch and receive all client traffic. Moreover, the proxy server runs in 'promiscuous' mode, accepting all connection requests regardless of the destination address. When a TCP SYN (Williams, 1998) packet destined for port 80 is received, it is redirected to the proxy server by the switch; neither IP nor TCP header information is altered. Next, the approach replaces the source IP address of the returned packets with the original server's IP address, to establish a connection between the client and the proxy server, even though the client believes that it is connected to the original server. Once the connection is established, the proxy server receives the GET packet and then completes the URL request as usual. The basic mechanism is identical if a policy-based router is used to construct a transparent proxy: the router redirects packets for port 80 to the proxy server, while non-HTTP traffic is routed to the Internet. More information about transparent proxies based on packet redirection can be found in (Legedza, 1998; Williams, 1998).

- Intercepted packets. Packet interception hijacks clients' requests and forwards them to a proxy server. Conventionally, the interception program must be placed at focal points of the network, to ensure that the IP packets of an intercepted TCP connection are captured. When this program sees the first TCP SYN packet of a connection, it stores information about the connection, such as the IP address and TCP port number of both the client and the original server. The address of the destination server is then replaced with that of the proxy server, and the packet is returned to the network. All subsequent packets from the same client and TCP port number are also forwarded to the proxy. On the reverse path, the interception program also receives the SYN ACK packets and subsequent packets returning from the proxy to the client. The source IP address and port number of these packets are then altered to the original server's IP address and port number, so that the client believes it is connected to the original server. (A sketch of this connection bookkeeping follows Table I.) More information about transparent proxies based on packet interception can be found in (Cohen, 1999; Danzig, 1998; Heddaya, 1998; Krishnan, 1999; Rodriguez, 2000). Table I compares the two mechanisms.

Table I Comparison of packet redirection and packet interception

Packet redirection
  Advantages: All HTTP requests are acknowledged; the proxy server can be deployed anywhere.
  Disadvantages: The layer-4 switch or router requires reconfiguration, making it difficult to take the system on-line or off-line; access is slower.

Packet interception
  Advantages: No router is required; normal access speed is maintained.
  Disadvantages: Packets are probably lost when the network is congested; the interception program must be deployed at focal points in the network.
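To illustrate the bookkeeping that packet interception requires, the following sketch models packets as (address, port) tuples; the proxy address and function names are invented, and a real interceptor would rewrite raw IP/TCP headers at a network focal point.

    PROXY_ADDR = ("10.0.0.1", 3128)   # assumed proxy address and port

    # Maps (client_ip, client_port) -> (original_server_ip, original_port)
    connections = {}

    def rewrite_outbound(src, dst, is_syn):
        """Divert a client's port-80 packet to the proxy, remembering the
        original destination so replies can be disguised later."""
        if dst[1] != 80:
            return dst                    # non-HTTP traffic passes through
        if is_syn:
            connections[src] = dst        # remember the original server
        return PROXY_ADDR if src in connections else dst

    def rewrite_inbound(src, dst):
        """Rewrite the proxy's replies so the client believes it is still
        talking to the original server."""
        if src == PROXY_ADDR and dst in connections:
            return connections[dst]       # restore the original address
        return src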

Nontransparent proxy

This approach requires users to configure their browsers manually to use proxy servers. The problems are as follows.

- Users do not know how to use proxy servers.
- Users prefer not to use proxy servers.

Documents relating to the use of proxy servers, such as FAQs, can be posted on well-known portal Web sites or news boards to promote the use of the servers. Training courses also help users take advantage of proxy servers. A proxy with fast Web retrieval can be employed to attract users' attention: if such a proxy is available, then users originally opposed to using a proxy may begin to use it. A policy-based router can be configured to dedicate sufficient bandwidth to a proxy server, thereby accelerating it. TANet adopts this method, and most of its users utilize proxy servers to retrieve Web pages. The TANet backbone routers are configured to reserve fixed bandwidth (approximately 64 Mbps of a total 155 Mbps) for a cluster of proxy servers at regional network centers in Taiwan. Users of these proxy servers enjoy far better WWW service because of this reserved bandwidth.

Figure 2 A proxy server with a high-speed link attracts clients' attention (the proxy server reaches the WAN over a high-speed link, while clients on the LAN share a regular link)

Figure 2 illustrates an alternative method for use when a policy-based router is unavailable. In this method, the proxy server is equipped with a private high-speed link, which likewise guarantees sufficient bandwidth for the proxy. The high-speed link can be an ADSL line (Tanenbaum, 1996), because it is cheap and fast. Currently, many universities in Taiwan accelerate their WWW service in this manner. If users still decline to take advantage of proxy servers and insist on retrieving pages from WWW servers directly, then the Web servers themselves must implement appropriate mechanisms to block abnormal access; several studies (Abdelzaher, 1999; Chandra, 2000; Chen, 1999) explore this area. Finally, we introduce an approach to enforcing proxy use in a nontransparent strategy. This scheme is similar to that of the transparent proxy, except that all packets destined for port 80 are redirected to a WWW server rather than a proxy. This WWW server stores documents concerning proxy servers, so users can conveniently learn how to use them. The scheme does not hijack packets sent to the proxy servers, so users can still retrieve Web pages through the proxies.

Implementation

A system, ProxyBreaker, was realized to demonstrate the approaches.

Figure 3 Architecture of ProxyBreaker (the extractor reads the proxy server's access log into intermediate files; the controller analyzes the files and feeds reconfiguration back to the proxy server; the viewer outputs the usage information)

ProxyBreaker’s design A proxy application, SQUID (Tseng, 1999; Wessels, 2001), was selected for enhancement to support Web usage control, because SQUID is popular on the Internet and has been adopted by numerous universities and institutions. An external module, ProxyBreaker, was developed to achieve Web usage control owing to the frequent version updating of SQUID. This approach avoided the need to maintain codes for the various SQUID versions. The log analysis method was used to integrate the module described here with SQUID.

ProxyBreaker’s architecture Figure 3 depicts the system’s architecture. The ProxyBreaker is comprised of three components - an extractor, a controller and a viewer. The extractor collects usage information from the access log of the proxy server. The extracted information is saved in intermediate files. The controller then analyzes the files and reconfigures the proxy server accordingly. The viewer provides a user interface for browsing the usage information.

The extractor The SQUID’s log (Wessels, 2001) is a plain text file composed of lines separated by newline characters. Each line records a user’s access. Typically, the log size for a busy proxy server is very large (nearly 1 Gbytes daily at NCU) such that the extractor merely collects necessary data from the log and stores it in intermediate files for further processing, reducing processing time and occupied disk space. The system selects four fields from the log - time, 11

client address, bytes and peerstatus/peerhost. The time field records the time at which the request was made. The client IP address field specifies the user who issued the request. The bytes field states the size of the retrieved page. The peerstatus/peerhost field consists of two subfields. The first is the cache status that indicates whether the cache object was hit. The second records the WWW server’s address. The subfield contains the proxy’s address if the cache was hit. Otherwise, the subfield stores the WWW server’s address. The ProxyBreaker does not include the hit retrieval in the user’s Web usage. The extractor employs two hash tables to store usage information. One saves client usage and the other saves Web server usage. The key to the first table is the string that cascades the client’s address and the Web server’s address, and then we can aggregate the usage of the same client-server pair. Similarly, the key to the second table is the string that cascades the Web server’s address and the client’s address. We can also aggregate the usage of the same server-client pair. The extractor writes the hash tables into the intermediate files when the log analysis is complete. The extractor periodically analyzes the access log and generates the intermediate files. As mentioned in the preceding section, the overuse margin determines the analysis period. The extractor calculates the current average transmission rate as it generates the intermediate files. The margin divided by the average transmission rate equals the subsequent analysis period. For example, suppose the margin equals 2 Mbytes and the current average transmission rate equals 20 Kbytes/sec: the next analysis will be performed after 100 (2000/20) seconds. The period has a lower bound to avoid serious performance degradation caused by extremely frequent log analysis and SQUID reconfiguration.
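A minimal sketch of the extractor's two-table aggregation, assuming the four fields have already been parsed from a log line; the names are illustrative, and ProxyBreaker itself was written in Perl:

    from collections import defaultdict

    client_usage = defaultdict(int)   # client|server key -> bytes used
    server_usage = defaultdict(int)   # server|client key -> bytes used

    def record(client, server, size, cache_hit):
        # Cache hits are served locally and are not counted as usage.
        if cache_hit:
            return
        client_usage[client + "|" + server] += size
        server_usage[server + "|" + client] += size

    # Example: a 40960-byte page fetched from a remote server (a cache miss).
    record("140.115.1.2", "www.example.org", 40960, False)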

The controller

The controller is responsible for checking Web usage against usage control lists. Initially, the controller loads two configuration files to obtain the usage control lists. The first file includes four fields and specifies client-to-server usage limits: the first field is the client address range; the second is the address range of the WWW server; the third is the time period; and the last is the maximum usage during that period. The second configuration file stores server-to-client usage limits; its format is similar to that of the first file, but with the order of the first two fields switched. After obtaining the usage control lists, the controller loads the intermediate files and aggregates the total usage of each client-server/server-client pair. The controller then generates the access control list accordingly and reconfigures SQUID to make the new setup effective.
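For illustration, client-to-server entries under this four-field scheme could be represented as follows; the addresses, time windows, and limits are invented, not ProxyBreaker's actual configuration syntax:

    # (client address range, server address range, time, maximum usage)
    client_to_server_limits = [
        ("140.115.0.0/16",  "journal.example.org", "24h", "50 Mbytes"),
        ("140.115.30.0/24", "*",                   "1h",  "100 Mbytes"),
    ]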

The viewer

The viewer is implemented as CGI programs that read the intermediate files to display usage information. The viewer is accessed over HTTP for ease of use and portability, so the administrator and users can retrieve their usage information through browsers.
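A minimal sketch of such a CGI viewer, assuming the intermediate file stores one 'client|server bytes' record per line; the file path and record format are assumptions, not ProxyBreaker's actual implementation:

    #!/usr/bin/env python
    import html

    INTERMEDIATE_FILE = "/var/proxybreaker/usage.txt"   # assumed path

    print("Content-Type: text/html\r\n")                # CGI header + blank line
    print("<html><body><table border='1'>")
    print("<tr><th>Client</th><th>Server</th><th>Bytes</th></tr>")
    with open(INTERMEDIATE_FILE) as f:
        for line in f:
            pair, size = line.rsplit(None, 1)           # split off byte count
            client, server = pair.split("|", 1)
            print("<tr><td>%s</td><td>%s</td><td>%s</td></tr>" %
                  (html.escape(client), html.escape(server), html.escape(size)))
    print("</table></body></html>")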

ProxyBreaker’s realization The ProxyBreaker was coded in Perl language and was operated on a personal computer with a Pentium II 400 processor, and two 10/100 Mbps net cards. The operating system was FreeBSD (FreeBSD, 2001). Figure 4 depicts the system’s implementation. The proxy software employed SQUID 2.3-STABLE3. The WWW server was the Apache 1.36 (Apache, 2001). Figure 5 shows ProxyBreaker’s home page. The page has three hyperlinks used to display usage information. The first gives abnormal usage. The second displays regular users’ usage. The third returns the usage by department at NCU. Figure 6 displays the interface for querying user’s usage. The users input their IP addresses and server’s IP addresses; the system then 13

Figure 5 ProxyBreaker's home page (hyperlinks: list overuse information; list regular usage; list usage by department at NCU)

Figure 6 Page used to query users' usage (controls: input Web servers' and clients' addresses; sort by servers' or clients' addresses; set the number of output records; select a month)

ProxyBreaker was installed on the proxy server at the NCU library to prevent abnormal access to electronic journals. Figure 7 presents the proxy deployment at NCU, where the nontransparent proxy approach was employed. Clients could connect to Web servers either directly or via the proxy servers, but connecting through the proxies is faster; empirical results indicated that about 85% of users at NCU utilized the proxy servers. In Figure 7, the thick lines represent bandwidth dedicated to the proxy servers, and the thick dotted line shows the bandwidth for other protocols, such as SMTP and FTP. The general-purpose proxy serves most HTTP requests, but requests for electronic journals are redirected to the proxy server running ProxyBreaker, which is configured to accept connections only to electronic journals. Accordingly, clients can connect to electronic publication sites through these proxy servers, but their use remains controlled.

Figure 7 The proxy deployment at NCU (a general-purpose proxy server and a proxy server with ProxyBreaker connect the NCU campus network and its clients to the WAN)

Related work

Some universities (NCTU, 2001; NSYSU, 2001; NTU, 2001) on TANet have also adopted the log analysis method to enhance their proxy servers with Web usage control. However, their servers do not support real-time control: their systems measure WWW usage every 24 hours, so a situation like the one described previously can easily occur. Furthermore, those systems do not prioritize Web service among different WWW servers and clients.

Several management tools for SQUID are now introduced. The Multi-Router Traffic Grapher (MRTG) (Oetiker, 2001) is an application that monitors network traffic loads. It generates Web pages that include GIF images visually representing the network traffic. MRTG is not limited to monitoring network status; rather, it can monitor any SNMP MIB (Perkins, 1997) variable. In fact, using data generated by external programs, it can be applied to monitor system loads, login sessions and much else.

Calamaris (Beermann, 2000) is a program that measures SQUID access and displays empirical results in HTML. It offers detailed information with which managers can tune their

proxy servers. Similar tools include Anemone (ELC, 2001), pwebstats (Gleeson, 2000), PY_Squid_Stats (Ramahefason, 2000), WebLog (Nottingham, 2000) and Squeezer (Kozinski, 2000).

Finally, two SQUID page filters, SquidGuard (Baltzersen, 2000) and Squirm (Foote, 2000), are described. They are installed on proxy servers to protect children from adult or violent content. The functions of the two filters are similar: both can reject requests for particular WWW servers, and both can check whether an incoming URL contains a specific string, blocking the URL if the string is found. According to Tseng's study (Tseng, 1999), SquidGuard performs far better than Squirm.

Table II compares these tools. Access control indicates that a system can statically reject a request from a user, or to a server, according to a specified access control list. A system with usage control can dynamically monitor access in response to the generated traffic. URL filter indicates that a system blocks a request if the requested URL contains a specific string.

Table II SQUID tool comparison

                                          ProxyBreaker   Calamaris      SquidGuard
Access control                            V                             V
Usage control                             V                             V
URL filter                                                              V
Proxy status report                       V              V*
Method for integrating external modules   Log analysis   Log analysis   URL redirection

* Calamaris offers more complete information.

Conclusions

Proxy servers are widely deployed on the Internet to accelerate Web retrieval and improve bandwidth utilization. Unfortunately, some users use them to mirror entire Web sites, regardless of actual need. This behavior wastes network bandwidth, increases WWW server loads, increases other users' waiting times, and violates copyrights. This research describes a solution to this problem that inhibits abnormal Web usage and prioritizes normal Web use. The presented approaches state how to add Web usage control to current proxy servers, and how to deploy the proxy servers effectively. Finally, ProxyBreaker was implemented to demonstrate the effectiveness of the proposed approaches. The results of this study verified that the solution was effective, and the abnormal access has not recurred.

Acknowledgment

The authors would like to thank the National Science Council of the Republic of China for financially supporting this research under Contract No. NSC 90-2213-E-008-045.

References

Abdelzaher, T.F. and Bhatti, N. (1999), "Web server QoS management by adaptive content delivery", IWQoS '99: 1999 Seventh International Workshop on Quality of Service, pp. 216-225.
Apache development group (2001), http://www.apache.org.
Baltzersen, P. (2000), http://www.squidguard.org/
Beermann, C. (2000), http://cord.de/tools/squid/calamaris/
Berners-Lee, T., Fielding, R. and Frystyk, H. (1996), "Hypertext Transfer Protocol -- HTTP/1.0", RFC 1945, IETF HTTP WG, May.
Cacheflow Inc. (2001), Cacheflow, http://www.cacheflow.com.
Chandra, S., Ellis, C.S. and Vahdat, A. (2000), "Differentiated multimedia Web services using quality aware transcoding", INFOCOM 2000, Proceedings of the Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Vol. 2, pp. 961-969.
Chen, X. and Mohapatra, P. (1999), "Providing differentiated service from an Internet server", Proceedings of the Eighth International Conference on Computer Communications and Networks, pp. 214-217.
Cisco Systems (2000), Cisco cache engine, http://www.cisco.com/warp/public/751/cache/cds_ov.htm.
Cohen, A., Rangarajan, S. and Singh, N. (1999), "Supporting transparent caching with standard proxy caches", The 4th International Web Caching Workshop.
Danzig, P. (1998), "NetCache architecture and deployment", Computer Networks, Vol. 30, Issue 22-23, pp. 2081-2091.
ELC Technologies (2001), http://anemone.electricc.com/
Fielding, R., Gettys, J., Mogul, J., Frystyk, H. and Berners-Lee, T. (1999), "Hypertext Transfer Protocol -- HTTP/1.1", RFC 2616, June.
Foote, C. (2000), "Squirm - A Redirector for Squid", http://www.wenet.com.au/squirm/.
FreeBSD (2001), http://www.freebsd.org
Gleeson, M. (2000), http://martin.gleeson.com/pwebstats/
Heddaya, A. (1998), "DynaCache: weaving caching into the Internet", The 3rd International Web Caching Workshop, Manchester, England, June.
Kozinski, M. (2000), http://www.geocities.com/maciej_kozinski/w3cache/squeezer.html
Krishnan, P., Raz, D. and Shavitt, Y. (1999), "Transparent en-route caching in WANs", The 4th International Web Caching Workshop.
Legedza, U. and Guttag, J. (1998), "Using network-level support to improve cache routing", Computer Networks, Vol. 30, Issue 22-23, pp. 2193-2201.
National Chiao Tung University (2001), http://proxy.nctu.edu.tw
National Sun Yat-Sen University (2001), http://proxy.nsysu.edu.tw
National Taiwan University (2001), http://proxy.ntu.edu.tw
Nottingham, M. (2000), http://www.mnot.net/scripting/python/WebLog/
Oetiker, T. and Rand, D. (2001), "MRTG: Multi Router Traffic Grapher", http://ee-staff.ethz.ch/~oetiker/webtools/mrtg/mrtg.html.
Perkins, D. and McGinnis, E. (1997), Understanding SNMP MIBs, Prentice-Hall, Englewood Cliffs, NJ.
Ramahefason, D. (2000), http://casimir.easynet.fr/
Rodriguez, P., Sibal, S. and Spatscheck, O. (2000), "TPOT: Translucent Proxying of TCP", The 5th International Web Caching and Content Delivery Workshop.
Squid Internet Object Cache (2001), http://squid.nlanr.net/Squid
Tanenbaum, A.S. (1996), Computer Networks, 3rd edition, Prentice Hall, USA.
Tennyson Maxwell Information Systems, Inc. (2001), http://www.tenmax.com/home.htm
Thompson, K., Miller, G.J. and Wilder, R. (1997), "Wide-area Internet traffic patterns and characteristics", IEEE Network, November/December.
Tseng, J.P. and Chen, H.H. (1999), "The implementation and evaluation of the proxy server's access control system", Proceedings of TANet'99 Conference.
Wessels, D. (2001), http://www.squid-cache.org/Doc/FAQ/
Williams, B. (1998), "Transparent web caching solutions", The 3rd International Web Caching Workshop, Manchester, England, June.
