Improving Web Prefetching by Making Predictions at Prefetch

B. de la Ossa, J. A. Gil, J. Sahuquillo and A. Pont
Department of Computer Engineering, Polytechnic University of Valencia
Camino de Vera, s/n, 46022 Valencia (Spain)
[email protected], {jagil, jsahuqui, apont}@disca.upv.es Abstract— Most of the research attempts to improve Web prefetching techniques have focused on the prediction algorithm with the objective of increasing its precision or, in the best case, to reduce the user’s perceived latency. In contrast, to improve prefetching performance, this work concentrates in the prefetching engine and proposes the Prediction at Prefetch (P@P) technique. This paper explains how a prefetching technique can be extended to include our P@P proposal on real world conditions without changes in the web architecture or HTTP protocol. To show how this proposal can improve prefetching performance an extensive performance evaluation study has been done and the results show that P@P can considerably reduce the user’s perceived latency with no additional cost over the basic prefetch mechanism.
I. INTRODUCTION

Web prefetching is a technique to reduce the web user's perceived latency. Unlike techniques with similar aims, such as web caching and web replication, which have been extensively researched and implemented, there are few attempts at web prefetching implementations for real-world usage. Web prefetching processes a user request before the user actually demands it, during the idle time between two requests. It is a speculative technique, so if it is not accurate enough it can negatively affect system performance, since it consumes extra resources such as network bandwidth and server time.

In a previous work [1] we showed how web prediction and prefetching techniques can work efficiently in a real environment without modifying the standard HTTP protocol. To this end we implemented Delfos, which makes predictions on the server side and reports the hints to the web browser (also referred to as the web client) in standard HTTP headers. The client decides which hints to prefetch during the user think time.

To improve prefetch performance, in this paper we propose the Prediction at Prefetch (P@P) technique, which allows the prediction algorithm located at the web server to provide hints not only for standard object requests but also for prefetching requests. That is, this technique allows the prediction engine to provide more hints to the client. Existing prediction algorithms can be used together with P@P without any modification.

The Prediction at Prefetch technique has been implemented in Delfos and tested under real-world conditions. An important feature of the technique is that it requires no changes to
the web architecture, the standard HTTP protocol or the web browser. Trace-driven experiments were performed to study the impact of the proposed technique on the user's perceived latency and on the traffic increase. Our results show that Prediction at Prefetch can significantly reduce the user's perceived latency at no additional cost over the original prefetch mechanism.

The remainder of this paper is organized as follows. Section II describes the motivation of our work. Section III describes the basic method for web prediction and prefetching implemented in Delfos. Section IV presents Prediction at Prefetch. The evaluation methodology and the experimental environment are described in Section V, and Section VI presents the experiments and results. Finally, Section VII presents the concluding remarks.

II. MOTIVATION

Web prefetching involves two main steps. First, predictions are made based on previous experience of users' accesses and preferences, and the corresponding hints are provided. Second, the prefetching engine decides which objects are going to be prefetched. The prefetching engine can be located either at the web browser or at an intermediate web proxy server, while the predictions can be performed by the web server, as described in most research works [2], [3], but also by the web browser [4] or by an intermediate proxy [5]. In this work, it is assumed that the web server provides hints and the web client prefetches them.

Most research efforts related to web prefetching focus on improving theoretical indexes of the prediction algorithm, such as precision and recall [6], [7]; few works use the reduction of the user's perceived latency and the traffic increase to evaluate and compare proposals [8]. As a consequence, most research works concentrate on prediction algorithms, and the literature offers a wide set of them. They can be classified according to the type of information gathered and the data structure used for the prediction: object popularity [3], Markov models [9], [2], [10], web structure [11], [12], Prediction by Partial Matching [5], [13], [14], data mining [15] and genetic algorithms [16].

Unlike classical research, our proposal does not focus on the prediction algorithm. Instead, we focus on the way web prefetching works, to make it more efficient and to reduce the user's perceived latency. In this sense, our proposal is
orthogonal to the prediction algorithm; that is, it can be used with any prediction algorithm. To evaluate our proposal, in this paper we use two prediction algorithms: the Dependency Graph algorithm proposed by Padmanabhan and Mogul [2], which has been widely referenced in the literature, and the Double Dependency Graph algorithm recently proposed by Doménech et al. [17], which improves on Dependency Graph by providing better performance at a similar cost.

Fig. 1. Communication between the web browser, the web server and the prediction engine when using web prediction and prefetching
III. BASIC WEB PREFETCHING

Fig. 1 shows the communication between the user, the web browser, the web server and the prediction engine in a basic web prediction and prefetching architecture. Predictions are generated by the prediction engine, and the web server provides them to the browser. The web browser may or may not prefetch the provided hints during its idle time.

Mozilla Firefox is a web browser with web prefetching capability. Web prefetching first became available in Mozilla Suite 1.2 (published at the end of 2002), and other web browsers based on the same Mozilla Foundation technologies include this capability, e.g., SeaMonkey, Netscape, Camino, and Epiphany. We use Mozilla Firefox in our experiments since it already implements all the required features regarding prefetching; in addition, it is widely used by both casual and expert users, it is published under a free and open-source license, and its source code is freely available.

The prefetching mechanism implemented in Mozilla [18] was first proposed by Padmanabhan and Mogul [2], and standardized in HTTP/1.1 (RFC 2616) [19]. The web server can provide one or more URIs if it considers that the user is likely to visit them soon. These URIs, or hints, can be provided in three different ways: in a 'meta' tag in the HTML header, in a 'link' tag in the HTML body, or in an HTTP header included in the response, for example:

Link: <uri>; rel=prefetch

The implementation of web prefetching in Mozilla features some interesting aspects:
• Prefetching occurs only when the web browser is idle.
• Prefetch requests sent by Mozilla include an additional HTTP request header indicating that it is a prefetch request.
• Mozilla does not require prefetch requests to be answered, so web servers can filter those requests, e.g., under server overload conditions.
• Hints are only prefetched when the object that includes the hints is demanded by the user.
• Only the provided URIs using the HTTP protocol are prefetched, not their embedded objects.
• URIs that contain parameters (the query part of the URI) are not prefetched.
• If the user clicks on a link while the browser is prefetching, the prefetch process is interrupted to satisfy the user's real request, and any pending prefetch queue is discarded. A partially downloaded object is kept in the cache and completed if the user demands it. Later, when the browser is idle again, new hints can be prefetched.

A sketch of these client-side rules in code is given below.
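The following Python sketch mimics the rules above. It is purely illustrative, not Mozilla's implementation: all function names are ours, the cache and idle check are stand-ins, and we assume the X-moz: prefetch request header as the prefetch marker used in the server-driven approach of [18].

```python
import queue
from urllib.parse import urlsplit

import requests  # assumption: a plain HTTP client stands in for the browser's network layer


def enqueue_hints(hints, cache, prefetch_queue):
    """Queue the hints attached to an object demanded by the user."""
    for uri in hints:
        parts = urlsplit(uri)
        # Only http URIs without a query part are eligible, and objects
        # already cached need not be prefetched again.
        if parts.scheme == "http" and not parts.query and uri not in cache:
            prefetch_queue.put(uri)


def prefetch_while_idle(cache, prefetch_queue, browser_is_idle):
    """Download queued hints one by one, but only while the browser is idle."""
    while browser_is_idle() and not prefetch_queue.empty():
        uri = prefetch_queue.get()
        # The extra header marks this as a prefetch request, so the server
        # may silently drop it (e.g., under overload) without harming the user.
        response = requests.get(uri, headers={"X-moz": "prefetch"})
        cache[uri] = response.content  # kept in cache until the user demands it
```

A user click would simply make browser_is_idle() return false and stop the loop; Mozilla additionally keeps the partially downloaded object and completes it on demand, a detail omitted here.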
IV. PREDICTION AT PREFETCH

The proposed technique to improve prefetching performance, Prediction at Prefetch (P@P), is a simple and effective technique that allows web clients to receive more hints without negatively affecting the precision of the prediction algorithm. Predictions are provided not only for objects requested on demand, but also for prefetched objects: any web request received by the web server, whether a fetch or a prefetch request, triggers a prediction, and the resulting hints are included in the corresponding response.

Notice that when the P@P technique is used, more hints are reported to the web browsers than with basic prediction. Therefore, more objects are expected to be prefetched. As a consequence, the traffic will increase, but if the prediction algorithm is accurate enough, the object latency perceived by the user will be reduced. By properly configuring the aggressiveness of the prediction algorithm, both for fetch and prefetch requests, it is possible to reduce the latency by a higher ratio than the traffic increase.

Fig. 2. Communication with Prediction at Prefetch enabled

Fig. 2 shows an example of the communication among the web browser, the web server and the prediction engine when web prediction, prefetching and Prediction at Prefetch work together. First, the user demands object A. The prediction engine predicts that the user will demand objects B and H in the near future. The web browser, while idle, prefetches object B. The response to that prefetch request includes another hint, this time for object C. However, the web browser does not prefetch that hint yet, since it was provided on a prefetch request. If the user later demands object B, the browser will already have it in its cache, so it is served to the user with zero service time. The prefetched object B is now considered an object demanded by the user, so the hints included in its response can be prefetched: object C is finally prefetched.

The hints provided to the browser on a prefetch request are thus prefetched only if the prefetched object is eventually requested by the user. This ensures that those hints are as accurate as the hints provided for objects requested on demand.

The predictions made for a prefetch request should not update the information gathered by the prediction algorithm about the user's behaviour and navigation patterns. The reason is that prefetched objects are not requested by the user; they are requested by the web browser based on a prediction that might or might not be successful. If the prediction algorithm wrongly treated a prefetch request as equivalent to a user request, it would learn an unrealistic model of the user's navigation patterns.

Notice that no modification is required in the web browser, but the web server that provides hints must be updated to handle prefetch requests conveniently. Any existing prediction algorithm can be used with P@P without any modification.
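As an illustration, a server-side hook implementing this behaviour might look like the following sketch. The request/response interface and all names are hypothetical, not Delfos's actual API; `predictor` stands for any prediction algorithm exposing learn() and predict().

```python
def handle_request(request, predictor, store):
    """Serve an object and attach prediction hints (P@P sketch)."""
    # Mozilla marks prefetch requests with an extra header (see Section III).
    is_prefetch = request.headers.get("X-moz") == "prefetch"

    # Learn only from demand requests: prefetches are speculative, and
    # treating them as user accesses would distort the learned patterns.
    if not is_prefetch:
        predictor.learn(request.client_id, request.uri)

    response = store.fetch(request.uri)

    # P@P: hints are attached to every response, prefetch responses included.
    # Basic prefetching would guard this loop with `if not is_prefetch:`.
    for hint in predictor.predict(request.uri):
        response.headers.add("Link", f"<{hint}>; rel=prefetch")
    return response
```

Note that the only server-side change with respect to basic prefetching is removing the guard around the prediction call; a separate threshold can still be applied to predictions triggered by prefetch requests, as done in the experiments of Section VI.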
V. EVALUATION METHODOLOGY AND EXPERIMENTAL ENVIRONMENT

The purpose of the experiments is to show how much our proposal can reduce the latency per object perceived by the user, and what traffic increase is required to accomplish it. To this end, several experiments were run, both enabling and disabling our proposal, and varying the threshold of the prediction algorithm, because it affects the prefetching aggressiveness. The cost-benefit evaluation methodology described by Doménech et al. [8] has been used to perform fair comparisons. Because the study focuses on the user's point of view [21], we evaluated the latency reduction per object, the traffic increase, and the object traffic increase, as defined by Doménech et al. [20].
TABLE I
TRACE CHARACTERISTICS

Characteristic                                    Value
Starting date                                     October 1st, 2005
Ending date                                       February 1st, 2006
Different objects requested                       15,000
User requests (object accesses)                   3,000,000
Avg. user requests per day                        26,000
Bytes transferred (MB)                            33,000
Avg. bytes transferred per day (MB)               285
Requests of objects smaller/bigger than 10 kB     76% / 24%
The latency per object is obtained from the service time reported by the Apache web server, or is zero if the object is already in the browser cache. The latency reduction per object is the ratio of the latency perceived with prefetching to the latency perceived without prefetching. The traffic increase quantifies, in bytes, the extra traffic incurred by prefetched objects that are never requested by the user; we do not take into account the network overhead introduced by the transmission of hints in HTTP headers, since their size is negligible compared to the size of the objects. We report the traffic increase as the ratio of the traffic generated with prefetching to the traffic generated without it. The object traffic increase quantifies the percentage increase in the number of objects a client retrieves when using prefetching, compared to not using it. We also measured the precision and recall performance indexes (as defined in [20]) to assess the impact of the proposal on the prediction algorithm:

$$\text{Precision} = \frac{\text{Prefetch hits}}{\text{Prefetches}}, \qquad \text{Recall} = \frac{\text{Prefetch hits}}{\text{User requests}}$$
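For instance, these indexes and the two ratios plotted in the figures could be computed from aggregate counts as in the sketch below (variable names and the example values are ours; the definitions follow [20]).

```python
def evaluate(prefetch_hits, prefetches, user_requests,
             latency_with, latency_without, bytes_with, bytes_without):
    """Compute the performance indexes from aggregate experiment counts."""
    precision = prefetch_hits / prefetches             # useful fraction of prefetches
    recall = prefetch_hits / user_requests             # demand requests served by prefetch
    latency_ratio = latency_with / latency_without     # latency per object ratio
    traffic_increase = bytes_with / bytes_without - 1  # extra bytes, as a ratio
    return precision, recall, latency_ratio, traffic_increase


# Made-up example: 600 useful prefetches out of 1000 issued over 10,000 user
# requests, with 14% lower latency per object and 8% more bytes transferred.
print(evaluate(600, 1000, 10_000, 0.86, 1.0, 1.08, 1.0))
```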
All performance indexes are obtained with a confidence interval of 95%. Nevertheless, for the sake of clarity, only average values are shown in the figures, as the interval lengths are always less than 5% of the mean value.

The prediction engine could be fed by a real web server receiving real requests from real users. However, in order to compare the performance of different configurations, a reproducible workload must be used. For this purpose we used a trace file logged by Apache 2 serving the web site of the School of Computer Science of the Polytechnic University of Valencia (www.ei.upv.es). The characteristics of the trace are summarized in Table I.

The experiments do not include a preliminary training phase. Instead, the prediction algorithm constantly learns the users' patterns during the experiments. This guarantees that the knowledge of the prediction algorithm about user patterns is up to date when the patterns change [22]. The experiments are long enough that they do not end in a transient phase.

The trainer used in the experiments starts requesting a page at the timestamp specified in the trace file. This means that the start of a page request is not related to the end time of the previous request from the same user. The embedded page objects are requested as soon as the main page object is served,
with two available connections per client. Objects already cached have zero service time, while objects requested from the server have the service time specified in the Apache custom log file.

In our experiments we implemented the Dependency Graph and Double Dependency Graph prediction algorithms, as mentioned in Section II. Our current implementations of both algorithms take around 3 milliseconds to perform a prediction.

The Dependency Graph prediction algorithm is based on a Markov model and considers two objects to be more related the more frequently they are requested one after the other within a window of the last accesses of the same client. The algorithm builds a dependency graph that stores the access patterns to the objects. The graph keeps a node for each object that has ever been accessed. There is an arc from node A to node B if and only if at some time node B was accessed after node A within a window of no more than W accesses by the same client, where W is the lookahead window size. The weight of an arc is the ratio of the number of occurrences of the arc from A to B to the number of occurrences of A. When a user accesses object A, the prediction reports as hints the URIs of the nodes that receive an arc from A. The probability of each hint is the weight of the corresponding arc, and a threshold is applied to limit the number of reported hints and preserve the overall quality of the prediction.

The Double Dependency Graph prediction algorithm is based on a graph that keeps track of the dependencies among the objects accessed by the user. It distinguishes two classes of dependencies: to an object of the same page and to an object of another page. Like Dependency Graph, the graph has a node for every object that has ever been accessed. There is an arc from node A to node B if and only if at some point in time a client accessed B within w accesses of A, where w is the lookahead window size. The arc is a primary arc if A and B are objects of different pages, that is, either B is an HTML object or the user accessed an HTML object between A and B. If there are no HTML accesses between A and B, the arc is secondary. Predictions are obtained by first applying a cutoff threshold to the weight of the primary arcs that leave the node of the last user access. To predict the embedded objects of the following page, a secondary threshold is applied to the secondary arcs that leave the nodes of the objects predicted in the first step. The Double Dependency Graph algorithm has the same order of complexity as Dependency Graph, since it builds a similar graph, only distinguishing two classes of arcs.

To perform fair comparisons, Delfos was configured the same way across the different experiments. Hints provided by the prediction algorithm with probability values lower than the configured threshold are not reported to the web browser; so the lower the threshold, the higher the aggressiveness of the prediction algorithm. More aggressiveness means that more hints are provided to the web client, so more objects are prefetched and, consequently, more hits or misses will arise.
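The following Python sketch illustrates the Dependency Graph bookkeeping described above. It is an illustration of the algorithm in [2], not Delfos's actual implementation, and all names are ours. Double Dependency Graph would extend it by labelling each arc primary or secondary and applying a second threshold to the secondary arcs.

```python
from collections import defaultdict


class DependencyGraph:
    """Sketch of the Dependency Graph predictor of Padmanabhan and Mogul [2]."""

    def __init__(self, window=2, threshold=0.3):
        self.window = window        # lookahead window size W
        self.threshold = threshold  # minimum arc weight for a hint to be reported
        self.node_count = defaultdict(int)                      # occurrences of each URI
        self.arc_count = defaultdict(lambda: defaultdict(int))  # transitions A -> B
        self.history = defaultdict(list)                        # last W accesses per client

    def learn(self, client, uri):
        """Update the graph with a demand request (prefetch requests are excluded)."""
        for prev in self.history[client]:
            self.arc_count[prev][uri] += 1
        self.node_count[uri] += 1
        self.history[client] = (self.history[client] + [uri])[-self.window:]

    def predict(self, uri):
        """Report as hints the successors of uri whose arc weight passes the threshold."""
        occurrences = self.node_count[uri]
        if occurrences == 0:
            return []
        weighted = [(hint, count / occurrences)
                    for hint, count in self.arc_count[uri].items()
                    if count / occurrences >= self.threshold]
        return [hint for hint, _ in sorted(weighted, key=lambda w: -w[1])]
```

With window=2 and threshold=0.3, for example, an object B reached within two accesses after A in at least 30% of A's occurrences is reported as a hint for A.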
Fig. 3. Latency reduction versus byte traffic increase with the Double Dependency Graph prediction algorithm
The common options used in our experiments are: a maximum of 100 hints per prediction, a lookahead window size of 2 (without duplicates), and threshold values ranging from 0.1 to 0.8.

VI. EXPERIMENTAL RESULTS

A. Latency

Fig. 3 shows the object latency reduction versus the byte traffic increase for two prediction methods: the basic method, which performs predictions only for user requests, as described in Section III, and the P@P method, which performs predictions also for prefetch requests, as described in Section IV. The curves shown in the figures were obtained by varying the algorithm threshold (aggressiveness) from 0.1 to 0.8.

As is known, and as our results also show, some configurations allow basic prefetching to improve the latency per object by a percentage higher than the traffic increase. Prefetching is thus an interesting technique to reduce the latency per object perceived by the user, provided that bandwidth is available and the prediction algorithm is properly configured for the target workload.

When using the P@P method, the threshold for the additional predictions can be set independently of that for standard prediction requests. In the experiments, P@P is evaluated with thresholds ranging from 0.5 down to 0.1 ([email protected] down to [email protected]). The objective of this study is not to find the optimal configuration of the P@P technique, because it strongly depends on the environment conditions (bandwidth, server load, traffic, ...), but to demonstrate that we can always find a P@P configuration that outperforms the basic prefetching technique.
Fig. 4. Latency reduction versus byte traffic increase with the Dependency Graph prediction algorithm
As observed, the aggressiveness of the prediction algorithm has a two-sided effect on the results: it achieves a reduction of the perceived latency, but at the expense of a higher byte traffic increase. For a given byte traffic increase, the P@P configurations achieve a higher latency reduction per object than basic prefetching. That means that for the same or a similar latency reduction, there is always a P@P configuration that requires less byte traffic. Particularly interesting are [email protected] and [email protected], which provide the best cost-benefit ratio. Moreover, in some situations [email protected] provides a latency reduction of up to 14% while increasing the byte traffic by only about 8%.

Fig. 4 depicts the latency reduction versus the byte traffic increase obtained when using Dependency Graph. Notice that this algorithm is much more aggressive than Double Dependency Graph, as it generates about five times more byte traffic. As observed, the Prediction at Prefetch technique provides a similar benefit for both algorithms; that is, the maximum distance between the best P@P curve and the basic prefetch curve, measured in absolute values, is similar for Double Dependency Graph (see Fig. 3) and Dependency Graph (see Fig. 4). Nevertheless, the Dependency Graph algorithm is less suitable for working under bandwidth restrictions, because for a reasonable latency reduction it generates 20% more traffic than Double Dependency Graph.

B. Prediction-related performance indexes

In this subsection we show the impact of our technique on the prediction-related performance indexes, i.e., from the prediction algorithm's point of view. The experiments performed with both prediction algorithms show similar behaviour. Nevertheless, as shown in the previous section, the Double Dependency Graph algorithm is more efficient than Dependency Graph because it requires less traffic increase to achieve the same latency reduction.
Fig. 5. Precision per byte versus byte traffic increase

Fig. 6. Recall per byte versus byte traffic increase
So, due to space limits, we only present results for the Double Dependency Graph algorithm.

Fig. 5 shows how the prediction method affects the precision per byte of the prefetched objects versus the byte traffic increase. The precision decreases as the prediction algorithm becomes more aggressive. In general, the precision obtained with Prediction at Prefetch is better than that obtained with basic prefetching, a difference that is accentuated with the more aggressive configurations. In these cases the recall per byte also increases, as shown in Fig. 6. This plot has a shape similar to that of Fig. 3 because the represented indexes (recall and latency reduction) are directly proportional.
Fig. 7. Mean nodes occurrence versus algorithm threshold (aggressiveness)

Fig. 8. Total database operations versus algorithm threshold (aggressiveness)
The P@P technique generally achieves a better cost-benefit ratio than basic prefetching, with [email protected] providing the best value. The maximum latency reduction and recall achievable by our proposal can be observed in Fig. 3 and Fig. 6, respectively: neither the latency reduction nor the recall can be improved beyond a certain point (about 35% and 20%, respectively) when allowing a reasonable traffic increase (about 22%).
C. Resource consumption

Concerning resource consumption, Fig. 7 gives an overview of the amount of information stored by the prediction algorithm under different configurations. Each object requested from the web server is represented by a node in the prediction algorithm's data structure, and each time a user requests an object, the occurrence count of its node is incremented. When the prediction algorithm is more aggressive it provides more hints, so the clients prefetch more objects, which in turn means fewer objects are demanded from the web server by the user. The prediction algorithm learns the user patterns from user requests only, not from prefetch requests, since the latter are speculative. With more aggressive configurations the algorithm therefore gathers less information, and this becomes a problem if the prediction algorithm receives so few requests that it cannot appropriately learn the user patterns. Experimental results obtained for a wide range of threshold values show that excessive aggressiveness largely reduces the precision without improving the recall or the perceived latency, so aggressive policies are not needed to improve prefetch performance.

Fig. 8 depicts the number of database operations performed by the prediction algorithm during the experiment. As this number is directly proportional to the algorithm complexity, it gives us an idea of the CPU consumption required by the different configurations. Obviously, P@P requires more database operations, since the prediction algorithm makes more predictions. However, each individual prediction requires similar processor time and similar database operations whether or not the proposed P@P technique is used.
VII. CONCLUSIONS

Prediction at Prefetch is a technique that aims to improve web prefetching in real environments by permitting the prediction algorithm located at the web server to provide hints not only for normal fetch requests, but also for prefetch requests.

We discussed in detail which characteristics are required of the web browser and the prediction engine in order to perform Prediction at Prefetch safely. Mozilla is a well-known web browser that satisfies all the requirements, so it can be used without any modification. Regarding the web server and the prediction engine, we proposed and implemented Prediction at Prefetch in Delfos and tested it with Mozilla under real-world conditions.

The additional aggressiveness allowed by the prediction engine when using the proposed technique reduces the latency per object perceived by the user at the expense of increasing the traffic. The effectiveness of Prediction at Prefetch with different prediction thresholds has been checked: the results of the experiments show that a properly configured prediction engine provides a good cost-benefit ratio. A latency reduction of up to 14% was achieved while increasing the byte traffic by only about 8%.

ACKNOWLEDGMENT

This work has been partially supported by the Spanish Ministry of Education and Science and the European Investment Fund
for Regional Development (FEDER) under grant TSI2005-07876-C03-01 and by La Cátedra Telefónica de Banda Ancha e Internet (e-BA) of the Polytechnic University of Valencia. The authors would like to thank the technical staff of the School of Computer Science of the Polytechnic University of Valencia (www.ei.upv.es) for providing us with recent and customized trace files logged by the web server of that school's web site.

REFERENCES

[1] B. de la Ossa, J. A. Gil, J. Sahuquillo, and A. Pont, "Delfos: the oracle to predict next web user's accesses," Unpublished, 2006.
[2] V. Padmanabhan and J. C. Mogul, "Using predictive prefetching to improve World Wide Web latency," Proceedings of the ACM SIGCOMM '96 Conference, Palo Alto, USA, 1996.
[3] E. Markatos and C. Chronaki, "A top-10 approach to prefetching on the web," Proceedings of INET '98, Geneva, Switzerland, 1998.
[4] W. Zhang, D. B. Lewanda, C. D. Janneck, and B. D. Davison, "Personalized web prefetching in Mozilla," Dept. of Computer Science and Engineering, Lehigh University, Bethlehem, USA, Tech. Rep. LU-CSE-03-006, 2003. [Online]. Available: http://www.cse.lehigh.edu/~brian/pubs/2003/mozilla/
[5] L. Fan, P. Cao, W. Lin, and Q. Jacobson, "Web prefetching between low-bandwidth clients and proxies: Potential and performance," Proceedings of the ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, pp. 178-187, 1999.
[6] X. Dongshan and S. Junyi, "A new Markov model for web access prediction," Computing in Science and Engineering, vol. 4, no. 6, pp. 34-39, 2002.
[7] A. Nanopoulos, D. Katsaros, and Y. Manolopoulos, "A data mining algorithm for generalized web prefetching," IEEE Trans. Knowl. Data Eng., vol. 15, no. 5, pp. 1155-1169, 2003.
[8] J. Doménech, A. Pont, J. Sahuquillo, and J. A. Gil, "A comparative study of web prefetching techniques focusing on user's perspective," IFIP International Conference on Network and Parallel Computing (NPC 2006), 2006.
[9] A. Bestavros, "Using speculation to reduce server load and service time on the WWW," Proceedings of the 4th ACM International Conference on Information and Knowledge Management, Baltimore, USA, 1995.
[10] I. Zukerman, D. W. Albrecht, and A. E. Nicholson, "Predicting users' requests on the WWW," UM '99: Proceedings of the Seventh International Conference on User Modeling, pp. 275-284, 1999.
[11] B. D. Davison, "Predicting web actions from HTML content," Proceedings of the 13th ACM Conference on Hypertext and Hypermedia, College Park, USA, 2002.
[12] T. Ibrahim and C. Xu, "Neural nets based predictive pre-fetching to tolerate WWW latency," Proceedings of the 20th IEEE International Conference on Distributed Computing Systems, Taipei, Taiwan, 2000.
[13] T. Palpanas and A. Mendelzon, "Web prefetching using partial match prediction," Proceedings of the 4th International Web Caching Workshop, San Diego, USA, 1999.
[14] X. Chen and X. Zhang, "Popularity-based PPM: An effective web prefetching technique for high accuracy and low storage," Proceedings of the 2002 International Conference on Parallel Processing, Vancouver, Canada, 2002.
[15] A. Nanopoulos, D. Katsaros, and Y. Manolopoulos, "Effective prediction of web-user accesses: A data mining approach," Proceedings of the Workshop on Mining Log Data across All Customer Touchpoints, San Francisco, USA, 2001.
[16] D. Bonino, F. Corno, and G. Squillero, "A real-time evolutionary algorithm for web prediction," Proceedings of the International Conference on Web Intelligence, Halifax, Canada, 2003.
[17] J. Doménech, J. A. Gil, J. Sahuquillo, and A. Pont, "DDG: An efficient prefetching algorithm for current web generation," Proceedings of the 1st IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), Boston, USA, November 2006.
[18] D. Fisher and G. Saksena, "Link prefetching in Mozilla: A server driven approach," Proceedings of the 8th International Workshop on Web Content Caching and Distribution (WCW 2003), 2003.
[19] "Hypertext Transfer Protocol -- HTTP/1.1," RFC 2616, June 1999. [Online]. Available: http://www.faqs.org/rfcs/rfc2616.html
[20] J. Doménech, J. Sahuquillo, J. A. Gil, and A. Pont, "About the heterogeneity of web prefetching performance key metrics," Proceedings of the 2004 International Conference on Intelligence in Communication Systems (INTELLCOMM 04), Bangkok, Thailand, November 2004.
[21] J. Doménech, A. Pont, J. Sahuquillo, and J. A. Gil, "Cost-benefit analysis of web prefetching algorithms from the user's point of view," Proceedings of the 5th International IFIP Networking Conference, Coimbra, Portugal, May 2006.
[22] J. Doménech, J. Sahuquillo, A. Pont, and J. A. Gil, "How current web generation affects prediction algorithms performance," Proceedings of the SoftCOM 2005 International Conference on Software, Telecommunications and Computer Networks, Split, Croatia, September 2005.