IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT, VOL. 10, NO. 2, JUNE 2013
Optimal Cache Timeout for Identifier-to-Locator Mappings with Handovers

Hongbin Luo, Member, IEEE, Hongke Zhang, and Chunming Qiao, Fellow, IEEE
Abstract—The locator/ID separation protocol (LISP), proposed for addressing the scalability issue of the current Internet, has gained much interest. LISP separates the identifier and locator roles of IP addresses by using end point identifiers (EIDs) and locators, respectively. In particular, while EIDs are used in the application and transport layers for identifying nodes, locators are used in the network layer for locating nodes in the network topology. In LISP, packets are tunneled from ingress tunnel routers (ITRs) to egress tunnel routers in a map-and-encapsulate manner. For this purpose, an ITR caches on demand some mappings between EIDs and locators. Since hosts roam from place to place, however, their EID-to-locator mappings change accordingly. Thus, an ITR cannot store a mapping permanently but maintains for every mapping a timer whose default value is set to a given cache timeout. If the cache timeout for a mapping is too short, an ITR frequently queries the mapping system (control plane), resulting in a high traffic load on the control plane. On the other hand, if the cache timeout for a mapping is too long, the mapping could become outdated, resulting in packet loss and associated overheads. Therefore, it is desirable to set an appropriate cache timeout for mapping entries. In this paper, we analytically determine the optimal cache timeout for EID-to-locator mappings cached at ITRs so as to minimize the control plane load while remaining efficient for mobility. The results presented here provide valuable insights and guidelines for deploying LISP.

Index Terms—Routing architecture, identifier/locator separation, cache timeout, handover process.
Manuscript received December 30, 2011; revised August 28 and November 4, 2012. The associate editor coordinating the review of this paper and approving it for publication was X. Fu. This work was supported in part by the National Basic Research Program ("973 Program") under Grant No. 2013CB329100, in part by the "863 Program" of China under Grants No. 2011AA010701 and 2011AA01A101, in part by the Natural Science Foundation of China (NSFC) under Grants No. 61271200, 61232017, and 6083302, and in part by the Program for New Century Excellent Talents in University under Grant No. NCET-12-0767.

H. Luo and H. Zhang are with the School of Electronic and Information Engineering, Beijing Jiaotong University (BJTU), Beijing 100044, China. They are also with the National Engineering Lab for Next Generation Internet Interconnection Devices at BJTU, Beijing 100044, China (e-mail: {hbluo, hkzhang}@bjtu.edu.cn). C. Qiao is with the Department of Computer Science and Engineering, State University of New York at Buffalo, 201 Bell Hall, Buffalo, NY 14260-2000, U.S.A. (e-mail: [email protected]).

Digital Object Identifier 10.1109/TNSM.2012.122612.110221

I. INTRODUCTION

There is a growing consensus that separating the locator and identifier roles of Internet Protocol (IP) addresses (i.e., identifier/locator separation, or ID/Loc separation) helps to address the scalability issue of the current Internet [1], [2]. In ID/Loc separation, identifiers are used in the application and transport layers for identifying nodes, and locators are used in the network layer for locating nodes in the network topology. This makes it possible for nodes to change locators at any time without disrupting ongoing communication sessions, thus supporting efficient mobility, multi-homing, and traffic engineering [1]. Because of this, many newly proposed routing architectures adopt the idea of ID/Loc separation (e.g., [3], [4] and the references therein).

In this paper, we address the problem of determining the optimal cache timeout for identifier-to-locator mappings in the locator/ID separation protocol (LISP) [3], since LISP is the most attractive ID/Loc separation approach [5]. In the rest of this section, we first briefly describe how packets are forwarded in LISP. We then present the motivation, contributions, and organization of the paper.

A. Packet Forwarding in LISP

Fig. 1. An illustration of packet forwarding in LISP.

LISP splits the IP address space into two orthogonal spaces: the end point identifier (EID) space, used to identify end hosts, and the routing locator space, used to locate them in the Internet topology. In addition, LISP separates customer networks from provider networks. End hosts are located in customer networks and are identified by EIDs. By contrast, provider networks form the transit core and provide data transit service at the inter-domain level using locators. Customer networks connect to provider networks through ingress/egress tunnel routers (ITRs/ETRs), which are located at the edges of customer networks and are globally identified by locators.

LISP relies on the map-and-encapsulate mechanism to forward packets between end hosts, as illustrated in Fig. 1, where a correspondent node (CN) with EID_CN sends packets to a mobile node (MN) with EID_MN. As illustrated, the CN first sends packets with source EID_CN and destination EID_MN to its ITR (i.e., TR_1), as illustrated by (1) in Fig. 1. When TR_1 receives the first packet destined to the MN, it resolves an identifier-to-locator mapping for the MN by sending map-requests to the mapping system (control plane), as illustrated by (2) in Fig. 1. When TR_1 receives the MN's identifier-to-locator mapping, it locally caches the mapping in a temporary cache, as illustrated by (3) in Fig. 1. After that, TR_1 encapsulates the packet with an outer header whose destination is the resolved locator (i.e., Loc_2) and whose source is TR_1's locator (i.e., Loc_1), as illustrated by (4) in Fig. 1. TR_1 then sends the packet into the transit core, which forwards it to TR_2 (i.e., the MN's ETR) using the destination locator Loc_2. When TR_2 receives the encapsulated packet, it strips the outer header and sends the decapsulated packet to its destination, as illustrated by (5) in Fig. 1.
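To make the map-and-encapsulate data path concrete, the following minimal Python sketch shows the per-packet logic an ITR such as TR_1 might follow. It is an illustrative abstraction, not the LISP specification; the helper names (send_map_request, encapsulate) and the packet representation are assumptions.

```python
def forward_packet(packet, map_cache, my_locator, send_map_request, encapsulate):
    """Map-and-encapsulate at an ITR (illustrative sketch, not the LISP spec).

    map_cache: dict mapping a destination EID to the locator of its ETR.
    send_map_request / encapsulate: assumed helpers for querying the mapping
    system and for building the outer header, respectively.
    """
    dst_eid = packet["dst_eid"]
    locator = map_cache.get(dst_eid)
    if locator is None:
        # Cache miss: resolve the EID-to-locator mapping (steps (2)-(3) in Fig. 1)
        # and cache it for subsequent packets.
        locator = send_map_request(dst_eid)
        map_cache[dst_eid] = locator
    # Step (4) in Fig. 1: the outer header carries locators, the inner header EIDs.
    return encapsulate(packet, outer_src=my_locator, outer_dst=locator)
```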
B. Motivation

In LISP, an ITR needs to resolve an identifier-to-locator mapping for an MN through a map-resolution process, as illustrated by (2) in Fig. 1. Accordingly, there is a map-resolution delay, i.e., the delay between the time an ITR sends out a map-request and the time the ITR receives a map-reply for the MN. In general, the map-resolution delay (hundreds of ms) is several orders of magnitude higher than packet inter-arrival times (ns) [6], [7]. Therefore, to speed up packet forwarding and to protect the mapping system from floods of map-requests, ITRs are provisioned with mapping caches that temporarily store in-use mappings [5], [8]. In particular, an ITR maintains a time-to-live (TTL) for every mapping in its local cache. If a mapping is used to encapsulate a packet with the matching destination host identifier before its TTL expires, the TTL is reset to a default value called the cache timeout; otherwise, the mapping is removed from the cache [9].

With the rapid increase of mobile devices such as laptops, cell phones, and personal digital assistants, Cisco forecasts that the number of mobile devices connected to the Internet will exceed the number of people on earth by the end of 2012 [10]. In addition, mobile devices frequently change their points of attachment (POAs) by roaming from one ETR to another [11]. For example, it is reported that most mobile devices roam once every several tens of seconds [12]. Unfortunately, every time an MN changes its POA, a handover occurs and the identifier-to-locator mapping of the MN changes. Thus, if the cache timeout of the MN's mapping cached at remote ITRs is too long, the mapping could become outdated, and packets destined to the MN will be sent to an ETR that the MN no longer attaches to, which not only causes packet loss at that ETR but also triggers map-requests to the mapping system. On the other hand, if the cache timeout for a mapping is too short, ITRs will frequently query the mapping system, imposing a high traffic load on it. Therefore, it is of critical importance to set an appropriate cache timeout for identifier-to-locator mappings.

C. Contributions

In this paper, we analytically determine the optimal cache timeout by taking into consideration the packet inter-arrival process and the average handover rate (i.e., the average number of handovers that an MN endures per second). In particular, we investigate the effects of various factors (e.g., the average handover rate) on the optimal cache timeout assuming no cache overflow due to unlimited cache size at an ITR. Note that this assumption facilitates our analysis and is reasonable for the reasons presented later in Section VII-B.

Our main contributions are threefold. First, we find that the optimal cache timeout for an identifier-to-locator mapping is closely related to the employed handover process. For this purpose, we present two handover processes and analyze their impact on the optimal cache timeout. Second, we derive analytical results for the optimal cache timeout under the two presented handover processes. Third, we present numerical results to compare the optimal cache timeout under the two handover processes. We find that, among various factors including the distribution of packet inter-arrivals and the average handover rate, the optimal cache timeout is mainly determined by the average handover rate.

D. Organization

The rest of the paper is organized as follows. In Section II, we briefly outline related work. In Section III, we present the two handover processes considered in this paper. In Sections IV and V, we derive the optimal cache timeout for the two handover processes, respectively. In Section VI, we present numerical results to investigate the effects of various factors on the optimal cache timeout and to compare the optimal cache timeout under the two handover processes. In Section VII, we discuss possible usage scenarios of our results. Finally, we conclude the paper in Section VIII.

II. RELATED WORK

Our research is related to a large body of work in the literature, such as future Internet architectures [2], scalable routing architectures [4], and identifier-to-locator mapping approaches [7], [13] - [16]. Since it is impossible to enumerate all of them, we refer interested readers to [2], [4], [13] - [16] and the references therein for more details, and focus here on the work most closely related to ours.

In [9], based on real traffic traces, the authors analyzed the cost of caching identifier-to-locator mappings at ITRs. In particular, they assumed that the cache size is unlimited and that identifiers are aggregatable like IP addresses. Based on these assumptions, they analyzed the required storage space, the cache entry lifetime, and the number of cache misses when the cache timeout is 3 minutes, 30 minutes, and 300 minutes, respectively. When identifier-to-locator mappings are assumed to be static (i.e., no host mobility), it is found that the number of cache misses (i.e., the number of map-requests) decreases as the cache timeout increases. Based on real traffic data, the work in [16] also analyzed the number of cache misses as a function of the cache timeout. The difference is that the work in [9] assumed that identifiers are aggregatable like IP addresses, whereas the work in [16] assumed that identifiers are flat and not aggregatable. However, the work in [16] drew similar conclusions. While the work in [9] and [16] analyzed the number of cache misses under the assumption that the cache size is unlimited, the work in [18] further analyzed the number of cache misses with the assumption that the cache size is limited. However, the evaluations in [18] did not consider the cache timeout. Coras et al. built a model to analyze the map-cache's performance in terms of cache
miss rate [5]. The work in [8] also analyzed the map-cache's performance in a way similar to [9]. Instead of considering a cache timeout of 300 minutes, it considered a cache timeout of 60 seconds and found that the efficiency of the cache is still very high, with more than a 99% hit ratio, while the cache size is reduced almost by half compared to the 3-minute cache timeout case [8]. As stated in [9], however, the results presented in the above-mentioned work cannot be used in the case where identifiers roam from place to place, since the collected data traces did not consider host mobility. By contrast, we analyze the optimal cache timeout in the case where identifiers roam. Thus our work complements these studies.

The work in [11] proposed the LISP mobile node (LISP-MN) for dealing with host mobility in LISP, aiming at avoiding the triangular routing problem and at avoiding affecting the scalability of the mapping system and the routing system. In LISP-MN, a mobile node needs to be enhanced with a light-weight ITR/ETR implementation and updates the new identifier-to-locator mapping with the map-server that is pre-configured to serve as an anchor point of the MN. It is worth noting that, although LISP-MN does not affect the mapping system scalability, the MN's map-server and remote ITRs still need to deal with mapping updates. The work in [19] - [21] proposed several handover schemes to deal with host mobility. Unlike LISP-MN, which requires a mobile node to be enhanced with a light-weight ITR/ETR implementation, they aim at efficient mobility support without changing the protocol stack of mobile nodes. However, none of these approaches considered how to optimize the cache timeout.

We notice that the domain name system (DNS) [22] also heavily relies on caching: DNS resolvers locally cache recently used resource records for certain cache timeouts. However, there are two main differences. First, DNS resource records do not change as frequently as MNs change locators. Second, there is no handover process in DNS. By contrast, the optimal cache timeout of an identifier-to-locator mapping is closely related to the employed handover process, as will be shown later.

III. TWO HANDOVER PROCESSES

In this section, we describe two handover processes used for investigating the effect of handover processes on the optimal cache timeout. Handover 2 is presented in [21], and Handover 1 is a simple handover process proposed here for the purpose of comparison. The main notations used in describing the handover processes and in the analysis are listed in Table I.
TABLE I
NOTATIONS USED IN THE PAPER

  P_i    : Packets received by an ITR
  t_i    : The time an ITR receives packet P_i
  Δt_i   : The time interval between P_i and P_{i+1}
  P_d    : The first packet discarded by a pETR
  t_d    : The time an ITR receives P_d
  t_r    : The time an ITR receives the notification message from a pETR
  δ_r    : The average number of packets that the ITR receives during a map-resolution process
  δ_o    : The number of packets discarded during (t_d, t_r)
  c_r    : The cost of resolving an identifier-to-locator mapping
  c_c    : The cost of caching a packet
  c_d    : The cost of dropping a packet at an ITR
  c_p    : The cost of processing a packet by a mapping server
  c_d^e  : The cost of dropping a packet at an ETR
  RT_i   : The residence time that an MN attaches to ETR_i
  λ_r    : The average handover rate of an MN among ETRs
A. Handover 1

In Handover 1, an ITR resolves a new identifier-to-locator mapping of an MN when the ITR receives a notification that the MN has moved to a new ETR. After that, the ITR tunnels packets for the MN to the new ETR. Fig. 2 highlights the main steps.

Fig. 2. Illustration of Handover 1, assuming that an MN with EID_MN roams from a pETR TR_o to a new ETR TR_n.

Step 1) When the MN with EID_MN roams from a previous ETR (pETR) TR_o to a new ETR TR_n, TR_n registers the mapping from EID_MN onto its locator (i.e., the EID_MN-to-LOC_n mapping) with the mapping server. In addition, since the
ITR does not know about the MN's movement, it still sends packets to the pETR TR_o, as illustrated by (1) in Fig. 2.

Step 2) When the pETR TR_o receives a packet destined to the MN and finds that the MN does not attach to it, it discards the packet. Here, we assume that an ETR can determine, through some mechanism, whether or not an MN attaches to it. The pETR TR_o then sends a notification message to the ITR indicating that the MN has moved, as illustrated by (2) in Fig. 2. Denote the first packet discarded by the pETR as P_d and the time the ITR sends P_d as t_d.

Step 3) When the ITR receives this notification, it sends a map-request to the mapping server, which returns a map-reply to the ITR, as illustrated by (3) in Fig. 2. We assume that, when the ITR sends the map-request message to the mapping server, it also includes the current mapping of the MN so that the mapping server can return the new identifier-to-locator mapping of the MN to the ITR. Denote the time the ITR receives the notification from TR_o as t_r. Notice that packets sent between t_d and t_r will be discarded by the pETR TR_o. In addition, when the ITR receives packets destined to the MN during the map-resolution process, the ITR may send these packets to the mapping server, which then forwards them to the MN. It may also locally cache these packets and send
them after it receives the new identifier-to-locator mapping of the MN. In the former case, the processing overhead of the mapping servers increases significantly. In the latter case, the ITR must have enough storage space to cache these packets.

Step 4) When the ITR receives the map-reply from the mapping server, it updates the mapping entry of the MN. After that, it sends packets destined to the MN to the new ETR TR_n, as illustrated by (4) in Fig. 2.

Notice that it is possible for an ETR to receive multiple packets from the same ITR and for the same MN that no longer attaches to the ETR. In this case, the ETR discards all these packets, and we call them a discarded block of packets. Every time an ETR receives a discarded block of packets destined to an MN and finds that the MN no longer attaches to it, it sends a single notification message for the whole block of packets to the packets' ITR, which in turn sends only one map-request to the mapping server.
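The following minimal Python sketch summarizes the Handover 1 control logic described above, from the pETR's and the ITR's points of view. It is an illustrative abstraction under the assumptions of this section; the attribute and helper names (attached_mns, send_notification, send_map_request, and so on) are hypothetical, not messages defined by LISP.

```python
def petr_handle_packet(petr, packet):
    """pETR side of Handover 1 (Step 2): drop packets for a departed MN and
    notify the packets' ITR once per discarded block (sketch)."""
    mn = packet["dst_eid"]
    if mn in petr.attached_mns:
        petr.deliver(packet)
        return
    key = (packet["src_itr"], mn)
    petr.discarded[key] = petr.discarded.get(key, 0) + 1  # packet is dropped either way
    if petr.discarded[key] == 1:
        # First packet of the discarded block: a single notification suffices,
        # so the ITR issues only one map-request for the whole block.
        petr.send_notification(packet["src_itr"], mn)

def itr_handle_notification(itr, mn):
    """ITR side of Handover 1 (Steps 3-4): refresh the stale mapping."""
    stale = itr.map_cache.get(mn)            # included so the server can detect staleness
    new_locator = itr.send_map_request(mn, current_mapping=stale)
    itr.map_cache[mn] = new_locator          # subsequent packets go to the new ETR
```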
B. Handover 2

Handover 2 uses a tunnel between the pETR and the new ETR to forward packets during the handover process. Fig. 3 illustrates its main steps.

Fig. 3. Illustration of the second handover process (Handover 2), assuming that an MN with EID_MN roams from a pETR TR_o to a new ETR TR_n.

Step 1') When an MN moves from a pETR TR_o to a new ETR TR_n, the new ETR TR_n registers a new identifier-to-locator mapping of the MN, i.e., the mapping from EID_MN onto LOC_n, as illustrated by (1) in Fig. 3.

Step 2') When the mapping server (storing the mapping of the MN) receives the registration request from the new ETR TR_n, it retrieves the old identifier-to-locator mapping of the MN, i.e., the mapping from EID_MN onto LOC_o, and sends this old mapping to the new ETR TR_n, as illustrated by (2a) in Fig. 3. In addition, it sends the EID_MN-to-LOC_n mapping to the pETR TR_o, as illustrated by (2b) in Fig. 3.

Step 3') When the new ETR TR_n receives the old mapping, it knows that the MN has moved from TR_o to it. As a result, it sets up a tunnel with TR_o, as illustrated by (3) in Fig. 3.

Step 4') When the pETR TR_o receives a packet destined to the MN from the ITR, it forwards the packet to the new ETR TR_n through the tunnel between them. In addition, it notifies the new ETR TR_n of the identifier-to-locator mapping of the packet's source so that the new ETR TR_n can directly send packets destined to the source to the ITR. At the same time, the pETR TR_o notifies the ITR of the new identifier-to-locator mapping for the MN so that the ITR can directly send packets destined to the MN to the new ETR TR_n, as illustrated by (4) in Fig. 3.

Step 5') When the ITR receives the notification message from the pETR TR_o, it updates its local cache and directly sends subsequent packets destined to the MN to the new ETR TR_n, as illustrated by (5) in Fig. 3. When the new ETR receives packets from the ITR, it tears down the tunnel between itself and the pETR.

In the above handover process, we assume that mapping servers do not know the correspondent nodes (CNs) with which an MN is communicating, because an MN often communicates with multiple CNs at the same time [19] and maintaining such information would impose significantly larger storage and processing overhead on the mapping servers. As a result, when they receive a map registration for the MN, they cannot notify these CNs' ITRs about the MN's movement.
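As a companion to the sketch for Handover 1, the following Python fragment sketches the Handover 2 signaling of Steps 1')-4'): the mapping server relays the old and new mappings, and the pETR tunnels in-flight packets while notifying the stale ITR. All object and method names here are illustrative assumptions.

```python
def mapping_server_on_register(server, mn, new_etr, new_locator):
    """Steps 1')-2'): store the new mapping, then tell the new ETR where the MN
    came from and tell the pETR where the MN went (sketch)."""
    old_locator = server.mappings.get(mn)
    server.mappings[mn] = new_locator
    if old_locator is not None:
        server.send(new_etr, ("old_mapping", mn, old_locator))      # (2a) in Fig. 3
        server.send(old_locator, ("new_mapping", mn, new_locator))  # (2b) in Fig. 3

def petr_on_packet(petr, packet, tunnels, new_locator_of):
    """Step 4'): forward via the tunnel and update the stale ITR (sketch)."""
    mn = packet["dst_eid"]
    new_loc = new_locator_of.get(mn)
    if new_loc is not None:
        tunnels[mn].forward(packet)                       # no packet is lost here
        petr.notify_itr(packet["src_itr"], mn, new_loc)   # so the ITR re-caches the mapping
```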
We note that packets could be forwarded to their destinations by consecutive tunneling in order to further reduce the number of map-requests sent to the mapping servers. However, this comes at a cost. For example, it requires ETRs to maintain more network state (e.g., tunnels) and consumes more network bandwidth, because a packet is first sent to a pETR that then tunnels it to the new ETR, which not only makes the packet traverse more hops but also incurs tunneling overhead. Thus we do not consider consecutive tunneling in this paper. For example, when the MN in Fig. 3 moves from TR_n to yet another new ETR, we assume that a packet cannot be sent to the MN via TR_o, then TR_n, and then the latest ETR.

Notice also that Handover 2 is a representative scheme, since several other handover schemes can be obtained by slightly modifying it. For example, the scheme proposed in [19] is an extended version of Handover 2 designed to deal with indirect mapping. Similarly, the scheme proposed in [20] is a modified version of Handover 2 designed to reduce packet loss during the handover process. In particular, while in Handover 2 the tunnel between the pETR and the new ETR is established after the new ETR learns the identifier-to-locator mapping of the MN, in [20] the tunnel is established after the new ETR detects the attachment of the MN.

C. Comparison with Mobile IP

In the above descriptions, some terms (e.g., binding update) used in the two handover schemes are similar to those used in mobile IPv6 (MIPv6) and mobile IPv4 (MIPv4) [23], [24]. However, the two handover schemes are different from those of MIPv6 and MIPv4. For example, in basic MIPv4, the binding of an MN is always sent to the home agent of the MN, and only the home agent locally caches the binding of the MN. Therefore, it is not necessary to investigate how long the home agent should cache the binding. On the other hand, in the above two handover processes, it is possible that many ITRs locally cache the MN's binding, which makes it possible for the MN's binding cached at some ITRs to become outdated. Furthermore, it is difficult to actively update the binding cached at every ITR [19].

While the handover schemes for MIPv4 and MIPv6 do not need to update mappings cached at ITRs, they generally encounter the triangular routing problem, since a packet needs
to be forwarded first to the home agent (or a similar node) and only then forwarded to the MN. With the two handover schemes above, by contrast, packets can be tunneled directly from the ITR to the ETR to which the MN attaches once the handover process completes, thus avoiding the triangular routing problem.

IV. OPTIMAL CACHE TIMEOUT FOR HANDOVER 1

Before we quantitatively compare the effect of the handover processes on the optimal cache timeout, we describe the policy used to update identifier-to-locator mapping entries in an ITR's local cache. Like [9] and [16], we assume that ITRs update mapping entries in their identifier-to-locator mapping tables as follows: once a mapping entry is used before its TTL expires, its TTL is reset to the cache timeout T; otherwise, the mapping entry is removed from the ITR's local mapping table.

Fig. 4. Illustration of the policy used to update identifier-to-locator mapping entries in an ITR's local cache.

We illustrate this using the example shown in Fig. 4, where we assume four packets P_0, P_1, P_2, and P_3 arrive sequentially at t_0, t_1, t_2, and t_3, respectively. We also assume that, when packet P_0 arrives at the ITR, the ITR does not cache an identifier-to-locator mapping of the EID. As a result, when P_0 arrives, the ITR cannot find a mapping in its local mapping table (i.e., a cache miss occurs). It then sends a map-request in order to resolve an identifier-to-locator mapping of the EID. After a certain resolution delay, the ITR receives a map-reply, which is assumed to contain an identifier-to-locator mapping of the EID. At this time, the ITR caches the resolved mapping in its local identifier-to-locator mapping table and sets the TTL of the mapping to T, as illustrated by (1) in Fig. 4. Since packet P_1 arrives before the TTL expires, the ITR simply resets the TTL to T, as illustrated by (2) in Fig. 4. Similarly, since packet P_2 arrives before the TTL expires, the ITR resets the TTL to T, as illustrated by (3) in Fig. 4. However, since the TTL of the mapping has expired by the time packet P_3 arrives, the corresponding mapping has been removed from the ITR's local mapping table. Therefore, the ITR needs to send a map-request again in order to resolve a mapping of the EID, as illustrated in Fig. 4.

To facilitate our description and analysis, we denote by Δt_i the time interval between two subsequent packets P_i and P_{i+1} that arrive at the same ITR and are destined to the same identifier. In the example shown in Fig. 4, Δt_0 = t_1 − t_0, Δt_1 = t_2 − t_1, and Δt_2 = t_3 − t_2.
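The cache-update policy of Fig. 4 can be written down compactly. The sketch below is a minimal Python illustration rather than an implementation from the paper: it keeps one expiry time per mapping and resets it to now + T every time the mapping is used, and resolve_mapping is an assumed helper standing in for the map-request/map-reply exchange.

```python
class TimedMapCache:
    """Per-EID mapping cache with the reset-on-use TTL policy of Fig. 4."""

    def __init__(self, cache_timeout, resolve_mapping):
        self.T = cache_timeout          # the cache timeout analyzed in this paper
        self.resolve = resolve_mapping  # assumed helper: EID -> locator (map-request)
        self.entries = {}               # eid -> (locator, expiry)

    def locator_for(self, eid, now):
        entry = self.entries.get(eid)
        if entry is not None and now <= entry[1]:
            locator = entry[0]          # hit: the TTL is reset to T on every use
        else:
            locator = self.resolve(eid) # miss (P_0 or P_3 in Fig. 4): re-resolve
        self.entries[eid] = (locator, now + self.T)
        return locator
```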
Fig. 5. The model used for calculating the optimal cache timeout in handover process 1.

In particular, we use the model shown in Fig. 5, in which we consider the packet arrival process at an ITR as an infinite point process. As shown, the ITR initially does not cache any mapping of the considered destination identifier. As a result, when it receives a first packet P_0 destined to the identifier, it cannot find a mapping locally and sends a map-request to the mapping server. After a certain resolution delay, it receives a map-reply and stores the resolved mapping in its local mapping table. In addition, it sets for the mapping a TTL equal to T. When subsequent packets arrive, the ITR simply sends them to the resolved locator, which is assumed to correspond to ETR_1 in Fig. 5. At some point, the destination identifier attaches to ETR_2, which makes the mapping cached at the ITR outdated. However, the ITR is unaware of this and still sends packets to ETR_1. When ETR_1 receives these packets, it sends a notification message to the ITR, which then sends a map-request to the mapping server. After a certain resolution delay, the ITR receives a new mapping of the destination identifier, stores this new mapping in its local mapping table, and sets a TTL for the mapping. In addition, it sends subsequent packets to the resolved new locator, which is assumed to correspond to ETR_2 in Fig. 5. This process continues until, at some time, a packet (say, P_l) arrives and no further packet arrives before the TTL expires (i.e., the next packet interval is larger than the cache timeout T). Accordingly, the mapping of the destination identifier is removed from the ITR's cache. If a packet arrives after the TTL expires, the above process, from the arrival of P_0 to the expiry of the TTL, repeats.

For ease of presentation, we view the packets from P_0 to P_l as a chunk of packets. Notice that the number of packets in a chunk depends on the cache timeout T: if T is very large, this number may be very large; otherwise, it may be very small (e.g., only one or two). Given T, we let Pr_m(T) be the probability that a chunk of packets arrives at an ITR. From the above definition, the probability that a chunk of packets arrives at an ITR equals the probability that P_0 arrives at the ITR (i.e., a cache miss occurs). Similarly, given T, we let Pr_o(T) be the probability that a discarded block of packets (due to an outdated mapping) arrives at an ITR.

Note that, from a map-server's perspective, the cache timeout should be as long as possible in order to reduce the number of map-requests. From the perspective of an MN, however, the cache timeout should be as small as possible in order to reduce packet loss by reducing the probability that a cached mapping is outdated. In order to make this tradeoff, we are interested in the average cost caused by cache misses and outdated mappings.

Firstly, when a cache miss occurs, we assume that an ITR has three policies for dealing with packets that arrive during the map-resolution process.

Policy 1) The ITR drops the received packets. We use δ_r to denote the average number of packets that the ITR receives during a map-resolution process. In addition, we assume that the cost of resolving a mapping is c_r and that of dropping a packet is c_d. Thus the cost of dropping packets due to a cache miss is c_d(δ_r + 1), since the packet triggering the map-resolution process is dropped too.

Policy 2) The ITR locally caches the received packets. In this case, these packets will be sent to their destination when the ITR receives the desired mapping of the destination identifier. We assume that the cost of caching a packet is c_c. Obviously, the number of packets that need to be cached depends on δ_r. Accordingly, the total caching cost
due to a cache miss is c_c(δ_r + 1), since the packet triggering the map-resolution process is also cached.

Policy 3) The ITR sends the received packets to the mapping servers, which forward them to their destination. In this case, every time a mapping server receives such a packet, it is required to find where to forward it. Therefore, if an ITR sends a packet to the mapping servers, we view it as a map-request. We assume that the cost of a mapping server processing a packet is c_p. Accordingly, the total cost of processing packets due to a cache miss is c_p δ_r.

Secondly, in the case that a mapping is outdated, the packets sent during the period (t_d, t_r) will be dropped. Assume that the cost of discarding such a packet is c_d^e and that the average number of packets discarded during the period (t_d, t_r) is δ_o. Thus the total cost of discarding such packets is c_d^e δ_o. In addition, when a mapping is outdated, an ITR also needs to resolve a new mapping. Therefore, depending on the policy the ITR is using, some additional cost is incurred.

Therefore, the average cost C̄(T) caused by cache misses and outdated mappings in Policy 1 is given by

  C̄(T) = (c_r + c_d(δ_r + 1)) Pr_m(T) + (c_r + c_d δ_r + c_d^e δ_o) Pr_o(T).

Similarly, the average cost C̄(T) in Policies 2 and 3 is given by

  C̄(T) = (c_r + c_c(δ_r + 1)) Pr_m(T) + (c_r + c_c δ_r + c_d^e δ_o) Pr_o(T),

and

  C̄(T) = (c_r + c_p δ_r) Pr_m(T) + (c_r + c_p δ_r + c_d^e δ_o) Pr_o(T),

respectively. To facilitate our analysis, we combine the above equations into the more general equation

  C̄(T) = (c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) Pr_m(T)
          + (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o) Pr_o(T).   (1)

Note that, to calculate the optimal cache timeout T in Policy 1, we only need to let c_c = c_p = 0 in Eq. (1). Similarly, we let c_d = c_p = 0 and c_d = c_c = 0 in Eq. (1) to calculate the optimal cache timeout T in Policies 2 and 3, respectively.
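As a reading aid, the general cost of Eq. (1) and its per-policy specializations can be expressed directly in code. This is a small sketch of the cost model only; pr_m and pr_o stand in for the probabilities derived below, and the cost values are free parameters as in Section VI.

```python
def average_cost(T, pr_m, pr_o, c_r, c_c, c_d, c_p, c_de, delta_r, delta_o):
    """Average cost of Eq. (1): cache misses plus outdated mappings.

    pr_m and pr_o are callables returning Pr_m(T) and Pr_o(T)."""
    miss_cost = c_r + c_c * (delta_r + 1) + c_d * (delta_r + 1) + c_p * delta_r
    stale_cost = c_r + c_c * delta_r + c_d * delta_r + c_p * delta_r + c_de * delta_o
    return miss_cost * pr_m(T) + stale_cost * pr_o(T)

# Per-policy specializations (the zeroed parameters follow the text above):
#   Policy 1 (drop at the ITR):       c_c = c_p = 0
#   Policy 2 (cache at the ITR):      c_d = c_p = 0
#   Policy 3 (relay via map-servers): c_c = c_d = 0
```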
We now derive Pr_m(T) and Pr_o(T), respectively. To calculate Pr_m(T), we let Δt be generally distributed with probability density function f_c(Δt), and let F_c(Δt) be the corresponding cumulative distribution function. We have stated above that Pr_m(T) equals the probability that a chunk of packets arrives at an ITR. Accordingly, Pr_m(T) equals the probability that the interval between two consecutive packets (i.e., Δt) is larger than T, that is,

  Pr_m(T) = Pr{Δt > T} = 1 − F_c(T).   (2)
To calculate Pr_o(T), we use an approach similar to those used in [25], [26]. In particular, we let RT_1, RT_2, ..., RT_K denote the residence times during which an identifier attaches to a sequence of ETRs denoted by ETR_1, ETR_2, ..., ETR_K, respectively. Let P_{k+i+1} be the first packet of a discarded block of packets, and let P_{k+i} be the packet that arrives immediately before P_{k+i+1}, as illustrated in Fig. 6. We consider the number of possible handovers that a destination identifier may incur during the time interval between the arrival of P_{k+i} and that of P_{k+i+1}. Let RT_1′ denote the time interval between the arrival of P_{k+i} and the time the destination identifier exits ETR_1. In addition, we assume that RT_1, RT_2, ... are independently and identically distributed (i.i.d.) with a general probability density function f_r(t), whose corresponding cumulative distribution function is F_r(t), and that E[RT_i] = 1/λ_r, where λ_r is the average handover rate of an MN among ETRs. Let f_{r′}(t) be the probability density function of the residual residence time RT_1′. From the random observer property [27], we have

  f_{r′}(t) = λ_r ∫_t^∞ f_r(τ) dτ = λ_r [1 − F_r(t)].   (3)

Fig. 6. The model used for calculating Pr_o(T).

By its definition, Pr_o(T) is the probability that a discarded block of packets arrives at an ITR. In handover process 1, this implies that the inter-arrival time of two consecutive packets is at most the given cache timeout T and that, in addition, this inter-arrival time is larger than RT_1′. As a result, Pr_o(T) can be calculated as follows:

  Pr_o(T) = Pr(Δt > RT_1′ | Δt ≤ T) × Pr(Δt ≤ T)
          = [Pr(Δt > RT_1′, Δt ≤ T) / Pr(Δt ≤ T)] × Pr(Δt ≤ T)
          = Pr(RT_1′ ≤ Δt ≤ T),   (4)

where Pr(Δt ≤ T) is the probability that there is an identifier-to-locator mapping of the destination identifier cached at the ITR and equals F_c(T), and Pr(Δt > RT_1′ | Δt ≤ T) is
the conditional probability that the mapping is outdated given that there is an identifier-to-locator mapping of the destination identifier cached at the ITR. Given T,

  Pr(RT_1′ < Δt ≤ T) = ∫_0^T ∫_t^T f_c(Δt) dΔt f_{r′}(t) dt
                     = λ_r ∫_0^T [F_c(T) − F_c(t)] [1 − F_r(t)] dt.   (5)

Notice that, as defined previously, RT_1′ is the interval between the arrival of P_{k+i} and the time that the destination identifier exits the pETR (e.g., ETR_1 in Fig. 6). Thus, we have

  Pr_o(T) = λ_r ∫_0^T [F_c(T) − F_c(t)] [1 − F_r(t)] dt.   (6)

From Eqs. (1), (2), and (6), we have

  C̄(T) = (c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) [1 − F_c(T)]
          + (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o) λ_r ∫_0^T [F_c(T) − F_c(t)] [1 − F_r(t)] dt.   (7)

By calculating the derivative of C̄(T), we have

  dC̄(T)/dT = −(c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) f_c(T)
              + (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o) λ_r f_c(T) ∫_0^T [1 − F_r(t)] dt.

When C̄(T) reaches its minimum, dC̄(T)/dT = 0. Thus we have

  λ_r ∫_0^T [1 − F_r(t)] dt = (c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) / (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o).   (8)

Note that, if we only consider the cost of resolving identifier-to-locator mappings (i.e., c_c = c_d = c_p = c_d^e = 0), the above equation becomes

  λ_r ∫_0^T [1 − F_r(t)] dt = 1.
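For a general residence-time distribution F_r, Eq. (8) can be solved numerically. The sketch below is an illustrative fragment that assumes SciPy is available; it evaluates the left-hand side by quadrature and locates the root by bisection, and it presumes the right-hand side is strictly less than 1 (i.e., c_d^e δ_o > c_c + c_d), since the left-hand side never exceeds λ_r E[RT_i] = 1.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

def optimal_timeout_eq8(lambda_r, F_r, rhs, T_hi=1e7):
    """Solve Eq. (8): lambda_r * int_0^T [1 - F_r(t)] dt = rhs for T.

    F_r: CDF of the residence time; rhs: the cost ratio on the right-hand side
    of Eq. (8), assumed to lie strictly between 0 and 1 so a finite root exists."""
    def lhs(T):
        integral, _ = quad(lambda t: 1.0 - F_r(t), 0.0, T)
        return lambda_r * integral
    return brentq(lambda T: lhs(T) - rhs, 1e-9, T_hi)

# Example (assumed parameters): exponential residence times reproduce Eq. (20),
# i.e., T = -ln(1 - rhs) / lambda_r.
# import math
# T = optimal_timeout_eq8(0.01, lambda t: 1 - math.exp(-0.01 * t), rhs=0.9)
```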
V. OPTIMAL CACHE TIMEOUT FOR HANDOVER 2

In handover process 2, Pr_m(T) can also be calculated using Eq. (2). However, Pr_o(T) in handover process 2 is different from that in handover process 1. In handover process 1, when there is an identifier-to-locator mapping of a destination identifier cached at an ITR but the destination identifier has moved from a pETR to a new ETR, a packet sent to the pETR is discarded by the pETR. In handover process 2, by contrast, when a destination identifier moves from a pETR to a new ETR, the pETR forwards the packets it receives for the destination identifier to the new ETR through the tunnel between them. Based on our assumption, packets destined to the identifier are discarded only in the case that the destination identifier moves from the pETR to the new ETR, and then moves on to yet another new ETR. Therefore, by its definition, Pr_o(T) in handover process 2 can be represented as

  Pr_o(T) = Pr(RT_1′ + RT_2 < Δt | Δt ≤ T) × Pr(Δt ≤ T)
          = [Pr(RT_1′ + RT_2 < Δt, Δt ≤ T) / Pr(Δt ≤ T)] × Pr(Δt ≤ T)
          = Pr(RT_1′ + RT_2 < Δt ≤ T).   (9)

Notice that Pr_o(T) in Eq. (9) is different from that in Eq. (4), since Δt in handover process 2 has to be greater than the sum of RT_1′ and RT_2, whereas in handover process 1 it is only required to be greater than RT_1′. This is because in handover process 2 an ITR sends packets directly to the new ETR after it receives the new identifier-to-locator mapping of the destination identifier in Step 4'). Accordingly, only when RT_1′ + RT_2 < Δt is the mapping cached at the ITR outdated, so that packets destined to the corresponding identifier are dropped (i.e., there is a discarded block of packets).

In order to calculate this probability, we let L_c(s), L_r(s), and L_{r′}(s) be the Laplace transforms of f_c(t), f_r(t), and f_{r′}(t), respectively. From Eq. (3), we have

  L_{r′}(s) = ∫_0^∞ e^{−st} f_{r′}(t) dt = ∫_0^∞ e^{−st} λ_r [1 − F_r(t)] dt = (λ_r / s) [1 − L_r(s)].   (10)

At the same time, let ξ = RT_1′ + RT_2, and let f_ξ(t) and L_ξ(s) be the probability density function and the Laplace transform of ξ, respectively. From the independence of RT_1′ and RT_2, we have

  L_ξ(s) = E[e^{−sξ}] = E[e^{−s RT_1′}] E[e^{−s RT_2}] = L_{r′}(s) L_r(s) = (λ_r / s) [1 − L_r(s)] L_r(s).   (11)

As a result, f_ξ(t) is given by

  f_ξ(t) = (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} L_ξ(s) e^{st} ds = (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} (λ_r / s) [1 − L_r(s)] L_r(s) e^{st} ds.   (12)

Thus, Pr_o(T) in handover process 2 is given by

  Pr_o(T) = Pr(ξ < Δt ≤ T) = ∫_0^T ∫_t^T f_c(Δt) dΔt f_ξ(t) dt = ∫_0^T [F_c(T) − F_c(t)] f_ξ(t) dt.   (13)

Therefore, the average cost C̄(T) in handover process 2 is

  C̄(T) = (c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) Pr_m(T) + (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o) Pr_o(T)
        = (c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) [1 − F_c(T)]
          + (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o) ∫_0^T [F_c(T) − F_c(t)] f_ξ(t) dt.
In order to obtain the minimum value of C̄(T), we calculate its derivative and obtain

  dC̄(T)/dT = −(c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) f_c(T)
              + (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o) f_c(T) ∫_0^T f_ξ(t) dt.

When C̄(T) reaches its minimum, it must hold that dC̄(T)/dT = 0. Thus we have

  ∫_0^T f_ξ(t) dt = (c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) / (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o).   (14)

Bringing Eq. (12) into the left-hand term of Eq. (14), we have

  ∫_0^T f_ξ(t) dt = ∫_0^T (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} (λ_r / s) [1 − L_r(s)] L_r(s) e^{st} ds dt
                  = (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} (λ_r / s) [1 − L_r(s)] L_r(s) ∫_0^T e^{st} dt ds
                  = (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} (λ_r / s) [1 − L_r(s)] L_r(s) (e^{sT} − 1)/s ds
                  = (λ_r / 2πj) ∫_{σ−j∞}^{σ+j∞} (e^{sT} − 1) [1 − L_r(s)] L_r(s) / s^2 ds.   (15)

From Eqs. (14) and (15), we can obtain the optimal value of T in handover process 2.

While our analytical model presented above cannot be directly applied to all handover processes, we note that, for some of the variations discussed above (e.g., [19], [20]), the part that deals with Pr_m(T) is still applicable, and the part that deals with Pr_o(T) can be applied (or easily modified) to accommodate other handover processes.

VI. NUMERICAL RESULTS

In the above analysis, we assume that Δt and RT_i are generally distributed. As stated previously, the optimal T does not depend on the distribution of Δt. In this section, we present numerical results obtained by assuming that RT_i follows the exponential distribution

  f_r(t) = λ_r e^{−λ_r t}, t ≥ 0.   (16)

The Laplace transform of f_r(t) is given by

  L_r(s) = λ_r / (s + λ_r).   (17)

In addition, the CDF F_r(t) of f_r(t) is given by

  F_r(t) = 1 − e^{−λ_r t}, t ≥ 0.   (18)

In order to calculate the optimal cache timeout T in handover process 1, we bring Eq. (18) into Eq. (8) and have

  (c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) / (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o)
    = λ_r ∫_0^T [1 − F_r(t)] dt = λ_r ∫_0^T [1 − (1 − e^{−λ_r t})] dt = 1 − e^{−λ_r T}.

Rewriting the above equation, we have

  e^{−λ_r T} = 1 − (c_r + c_c(δ_r + 1) + c_d(δ_r + 1) + c_p δ_r) / (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o)
             = (c_d^e δ_o − c_c − c_d) / (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o).   (19)

From the above equation, it is obvious that, when c_d^e δ_o − c_c − c_d = 0, the optimal cache timeout should be ∞. Assuming that c_d^e δ_o − c_c − c_d > 0, we obtain the optimal cache timeout T in handover process 1 as

  T = − ln[(c_d^e δ_o − c_c − c_d) / (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o)] / λ_r.   (20)
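Eq. (20) translates directly into a few lines of code. The following Python helper, a sketch using illustrative parameter names, evaluates the closed-form optimal cache timeout for Handover 1 under exponential residence times; it returns infinity when c_d^e δ_o − c_c − c_d = 0, as noted above.

```python
import math

def optimal_timeout_handover1(lambda_r, c_r, c_c, c_d, c_p, c_de, delta_r, delta_o):
    """Optimal cache timeout of Eq. (20) for Handover 1 (exponential residence times)."""
    numerator = c_de * delta_o - c_c - c_d
    denominator = c_r + (c_c + c_d + c_p) * delta_r + c_de * delta_o
    if numerator == 0:
        return math.inf              # Eq. (19): e^{-lambda_r T} = 0 only as T -> infinity
    if numerator < 0:
        raise ValueError("Eq. (20) assumes c_de * delta_o > c_c + c_d")
    a = numerator / denominator      # the ratio 'a' used in Section VI
    return -math.log(a) / lambda_r

# Example with assumed unit costs: lambda_r = 0.01 handovers/s and a = 0.1
# gives T = -ln(0.1)/0.01, i.e., roughly 230 seconds (cf. Fig. 9).
```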
To calculate the optimal cache timeout T in handover process 2, we bring Eqs. (17) and (15) into Eq. (14) and obtain

  ∫_0^T f_ξ(t) dt = (λ_r / 2πj) ∫_{σ−j∞}^{σ+j∞} (e^{sT} − 1) [1 − L_r(s)] L_r(s) / s^2 ds
                  = (λ_r / 2πj) ∫_{σ−j∞}^{σ+j∞} (e^{sT} − 1) [1 − λ_r/(s + λ_r)] [λ_r/(s + λ_r)] / s^2 ds
                  = (λ_r^2 / 2πj) ∫_{σ−j∞}^{σ+j∞} (e^{sT} − 1) / [s (s + λ_r)^2] ds
                  = (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} λ_r^2 e^{sT} / [s (s + λ_r)^2] ds − (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} λ_r^2 / [s (s + λ_r)^2] ds.

Let G_1(s) = λ_r^2 / [s (s + λ_r)^2] and G_2(s) = λ_r^2 e^{sT} / [s (s + λ_r)^2], respectively, and let g_1(t) and g_2(t) be the inverse Laplace transforms of G_1(s) and G_2(s), respectively. It is obvious that

  g_1(t) = −λ_r t e^{−λ_r t} − e^{−λ_r t} + 1, t ≥ 0.   (21)

Accordingly, we have

  g_2(t) = g_1(t + T) = −λ_r (t + T) e^{−λ_r (t + T)} − e^{−λ_r (t + T)} + 1, t ≥ −T.   (22)

Since

  (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} G_1(s) ds = (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} G_1(s) e^{st} ds |_{t=0} = g_1(0)

and

  (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} G_2(s) ds = (1 / 2πj) ∫_{σ−j∞}^{σ+j∞} G_2(s) e^{st} ds |_{t=0} = g_2(0),

we have

  ∫_0^T f_ξ(t) dt = g_2(0) − g_1(0).   (23)

Bringing Eqs. (21) and (22) into the above equation, we obtain

  ∫_0^T f_ξ(t) dt = −λ_r T e^{−λ_r T} − e^{−λ_r T} + 1.   (24)

From the above equation and Eq. (14), we have

  λ_r T e^{−λ_r T} + e^{−λ_r T} = (c_d^e δ_o − c_c − c_d) / (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o).   (25)

From Eq. (25), it is obvious that the optimal cache timeout T in Handover 2 is ∞ if c_d^e δ_o − c_c − c_d = 0. When c_d^e δ_o − c_c − c_d > 0, given the parameters including c_r, c_c, and λ_r, we can calculate the optimal cache timeout T from Eq. (25).
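Unlike Eq. (20), Eq. (25) has no elementary closed form for T, but its left-hand side h(T) = (1 + λ_r T) e^{−λ_r T} decreases monotonically from 1 to 0 for T ≥ 0, so a simple bisection suffices. The following sketch is illustrative only; the function name and tolerance are assumptions.

```python
import math

def optimal_timeout_handover2(lambda_r, a, tol=1e-9):
    """Solve Eq. (25): (1 + lambda_r*T) * exp(-lambda_r*T) = a for T, with 0 < a < 1."""
    if not 0.0 < a < 1.0:
        raise ValueError("Eq. (25) has a finite positive solution only for 0 < a < 1")
    h = lambda T: (1.0 + lambda_r * T) * math.exp(-lambda_r * T)
    lo, hi = 0.0, 1.0
    while h(hi) > a:                 # expand the bracket until h(hi) <= a
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if h(mid) > a:
            lo = mid                 # h is decreasing, so the root lies to the right
        else:
            hi = mid
    return 0.5 * (lo + hi)

# As reported in Section VI, the resulting T is always larger than the Handover 1
# timeout -ln(a)/lambda_r obtained for the same ratio a (cf. Figs. 7 and 8).
```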
We will now present numerical results to show the effect of the parameters and of the handover processes on the optimal cache timeout T. For this purpose, we change the values of the parameters c_r, c_c, c_d, c_p, and c_d^e (see Table I for their definitions), and use Eqs. (20) and (25) to calculate the optimal cache timeout T. Since there are no specific data on the exact values of these costs, we simply treat them as parameters and study their effect on the optimal cache timeout. Not knowing the specific values of these cost parameters is acceptable because we are not trying to calculate a specific value of the optimal cache timeout; instead, we are interested in how the optimal cache timeout varies as a function of these cost parameters. For the value of λ_r, we assume that it ranges from 0.0001 to 0.01, since it is reported that handovers occur once every few tens of seconds to several thousands of seconds [12].

To compare the optimal cache timeouts of Handover 1 and Handover 2, we note from Eqs. (19) and (25) that the right-hand sides of the two equations are the same. Thus, we let

  a = (c_d^e δ_o − c_c − c_d) / (c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o)

and calculate the optimal cache timeout T of the two handover processes by varying a.

Fig. 7. Comparison of the optimal cache timeout T in Handover 1 and Handover 2 when a = (c_d^e δ_o − c_c − c_d)/(c_r + c_c δ_r + c_d δ_r + c_p δ_r + c_d^e δ_o) varies.

Fig. 7 shows the optimal cache timeout T of the two handover processes. From this figure, we observe that the optimal cache timeout T of Handover 2 is longer than that of Handover 1. Let T_1 and T_2 denote the optimal cache timeouts of Handover 1 and Handover 2, respectively. From Eqs. (19) and (25), we have

  λ_r T_2 e^{−λ_r T_2} + e^{−λ_r T_2} = e^{−λ_r T_1},   (26)

which implies that the ratio of T_2 over T_1 (i.e., T_2/T_1) does not depend on λ_r.

Fig. 8. The ratio T_2/T_1 when a varies.

Fig. 8 further shows T_2/T_1 when a varies from 0.01 to 0.9 for the optimal cache timeout values shown in Fig. 7. From this figure, we observe that T_2/T_1 is larger than 1, which implies that T_2 is longer than T_1. In addition, we observe that T_2/T_1 increases with a because, when a increases, T_2 must increase faster than T_1 so that the left-hand sides of Eqs. (19) and (25) equal the same a.

Fig. 9. The effect of λ_r on the optimal cache timeout T in Handover 1.

Fig. 9 plots the optimal cache timeout T as the average handover rate λ_r changes. From this figure, we first observe that the optimal cache timeout T increases as λ_r decreases. For example, when a = 0.1, the optimal cache timeout T is 230 seconds when λ_r = 0.01. However, when λ_r = 0.001, the optimal cache timeout T increases to 2,300 seconds, ten times the optimal cache timeout when λ_r = 0.01. When λ_r further reduces to 0.0001, the optimal cache timeout T increases to 23,000 seconds, ten times the optimal cache timeout when λ_r = 0.001. These results imply that the optimal cache timeout T is inversely proportional to λ_r because, from Eq. (19), λ_r T is a constant when a is fixed.

Fig. 10. The effect of the average number of packets δ_r received during a map-resolution process on the optimal cache timeout T in Handover 1.

Fig. 10 shows the effect of the average number of packets δ_r received during a map-resolution process on the optimal cache timeout T in handover process 1 when c_r = c_d^e = 1 and δ_o = 2, under the different policies that an ITR uses to deal with packets. In Policy 1 (i.e., c_c = c_p = 0), we set c_d = 1. In Policy 2 (i.e., c_d = c_p = 0), we set c_c = 1. And in Policy 3 (i.e., c_c = c_d = 0), we set c_p = 1. From this figure, we observe that the optimal cache timeout T increases with δ_r. This is because, when δ_r increases, the cost of caching packets or the cost of discarding packets increases. Thus, in order to reduce the total cost, the optimal cache timeout is expected to be longer so that the probability of cache misses
becomes smaller. In addition, we also observe that the different policies that an ITR uses to deal with packets have different impacts on the optimal cache timeout T. However, since we do not know the exact cost of caching a packet or the cost of discarding a packet caused by a cache miss, we do not compare the three policies further in this paper.

Fig. 11. The effect of the cost c_c of caching a packet at an ITR on the optimal cache timeout T in Handover 1.

Fig. 11 plots the effect of the cost c_c of caching a packet at an ITR on the optimal cache timeout T in handover process 1 when c_r = c_d^e = 1, δ_r = 1, and δ_o = 20. Note that we set c_d = c_p = 0, since packets are cached by an ITR only in Policy 2. From this figure, we observe that the optimal cache timeout T increases with c_c. For example, when λ_r = 0.0001, the optimal cache timeout T is 2,451 seconds when c_c = 2. When c_c increases to 6, the optimal cache timeout T increases to 6,568 seconds. If we further increase c_c to 9, the optimal cache timeout T increases to 10,033 seconds. This is because a larger c_c implies that the cost of caching a packet is higher, so cache misses become less desirable, which leads to a longer optimal cache timeout T. Note that, from Eq. (20), the effect of c_d on the optimal cache timeout T is similar to that of c_c.

Fig. 12. The effect of the average number of packets δ_o received during (t_d, t_r) on the optimal cache timeout T in Handover 1.

Fig. 12 shows the effect of the average number of packets δ_o received during (t_d, t_r) on the optimal cache timeout T in handover process 1, when c_c = c_d = 0, c_r = c_p = c_d^e = 1, and δ_r = 1. From this figure, we observe that the optimal cache timeout T decreases when δ_o increases. For example, when δ_o = 1 and λ_r = 0.01, the optimal cache timeout T is 105 seconds. When δ_o increases to 3, the optimal cache timeout reduces to 50 seconds. If δ_o further increases to 9, the optimal cache timeout reduces to 10 seconds. This is because, when the other parameters are fixed, the cost caused by packets discarded due to an outdated mapping is mainly determined by δ_o. Therefore, in order to reduce the total cost, the probability that a cached mapping is outdated should be smaller. Thus the optimal cache timeout T decreases as δ_o increases.

Fig. 13. The effect of the cost c_d^e of discarding a packet due to outdated mappings on the optimal cache timeout T in Handover 1.

Fig. 13 further shows the effect of the cost c_d^e of discarding a packet due to outdated mappings on the optimal cache timeout T in handover process 1 when c_r = c_p = 1, c_c = c_d = 0, δ_r = 1, and δ_o = 2. As shown, the optimal cache timeout T decreases with the increase of c_d^e. For example, when λ_r = 0.001 and c_d^e = 2, the optimal cache timeout T is about 405 seconds. When c_d^e increases to 3, the optimal cache timeout T reduces to about 287 seconds. If we further increase c_d^e to 9, the optimal cache timeout T reduces to about 105 seconds. This is because a larger c_d^e implies a higher cost of discarding packets caused by outdated mappings. Thus we desire that the
probability that a cached mapping is outdated be lower, which in turn requires that the optimal cache timeout T be smaller.

Fig. 14. The effect of the cost of map-resolution on the optimal cache timeout T in Handover 1.

Fig. 14 shows the effect of the cost c_r of a map-resolution on the optimal cache timeout T in handover process 1 when c_c = c_d = 0, c_p = c_d^e = 1, and δ_r = δ_o = 2. From this figure, we observe that the optimal cache timeout T increases with c_r. For example, when λ_r = 0.0001, the optimal cache timeout T is 12,528 seconds when c_r = 3. When c_r increases to 6 and 10, the optimal cache timeout T increases to 16,094 and 19,459 seconds, respectively. This is because, when c_r increases, the cost of a map-resolution increases. Thus the optimal cache timeout T is expected to increase in order to reduce the number of map-resolutions.

Fig. 15. The effect of the cost of processing a packet by the mapping servers on the optimal cache timeout T in Handover 1.

Fig. 15 shows the effect of the cost c_p of processing a packet by the mapping servers on the optimal cache timeout T in handover process 1 when c_c = c_d = 0, c_r = c_d^e = 1, and δ_r = δ_o = 1. From this figure, it is clear that the optimal cache timeout T increases with c_p. This can be explained as follows. When c_p increases, the cost of processing packets by the mapping servers increases. Thus it is desirable for the optimal cache timeout T to increase in order to reduce the number of cache misses, thereby reducing the total cost.

Notice that, while we only show the effects of various parameters on the optimal cache timeout T in handover process 1, we make similar observations from the results for handover process 2, which are not shown here due to space limitations.

VII. DISCUSSIONS

In this section, we first discuss possible usage scenarios of our results. We then discuss the effect of cache size.

A. Possible Usage Scenarios
First, in order to determine the number of mapping servers in a mapping system, one needs to know the average number of map-requests sent to the mapping system per identifier per second. For a given identifier-to-locator mapping, one can use Eq. (2) to calculate Pr_m(T), and use Eqs. (6) and (9) to calculate Pr_o(T) in handover process 1 and handover process 2, respectively. Accordingly, we can calculate the average number of map-requests sent to a mapping system per identifier per second. Below we take handover process 1 with Policy 1 (or Policy 2) as an example, where the average number of map-requests sent to a mapping system per identifier per second is given by Pr_m(T) + Pr_o(T). Note that if Policy 3 is used, the average number of map-requests sent to a mapping system per identifier per second is given by (1 + δ_r) Pr_m(T) + δ_r Pr_o(T).

We assume that Δt_i follows the well-known Pareto distribution with shape parameter β and location parameter α, since it is widely recognized that Internet traffic can be modeled as such [28]. The location parameter α corresponds to the minimal interval between two consecutive packets that are destined to the same identifier and received by the same ITR, while the shape parameter β defines the shape of the distribution of packet intervals. For Internet traffic, α ranges from several nanoseconds to several tens of microseconds and β ranges from 0.9 to 0.95 [28]. The Pareto distribution has the cumulative distribution function

  F_c(t) = 1 − (α/t)^β, α, β ≥ 0, t ≥ α,   (27)

with the corresponding probability density function

  f_c(t) = β α^β t^{−β−1}.   (28)

If β ≤ 2, the distribution has infinite variance, and if β ≤ 1, it has infinite mean. From Eq. (2), we know that Pr_m(T) is given by

  Pr_m(T) = 1 − F_c(T) = 1 − (1 − (α/T)^β) = (α/T)^β.   (29)

From Eq. (6), we have

  Pr_o(T) = λ_r ∫_0^T [F_c(T) − F_c(t)] [1 − F_r(t)] dt
          = λ_r ∫_0^T [1 − (α/T)^β − (1 − (α/t)^β)] [1 − (1 − e^{−λ_r t})] dt
          = λ_r ∫_0^T [(α/t)^β − (α/T)^β] e^{−λ_r t} dt
          = λ_r ∫_0^T (α/t)^β e^{−λ_r t} dt − λ_r ∫_0^T (α/T)^β e^{−λ_r t} dt.   (30)
The first term of Eq. (30) is given by

$$
\begin{aligned}
\lambda_r \int_0^T (\alpha/t)^{\beta} e^{-\lambda_r t}\,dt
&= \lambda_r^{\beta+1} \alpha^{\beta} \int_0^T (\lambda_r t)^{-\beta} e^{-\lambda_r t}\,dt \\
&= (\lambda_r \alpha)^{\beta} \int_0^{\lambda_r T} \tau^{-\beta} e^{-\tau}\,d\tau \qquad (\text{let } \tau = \lambda_r t) \\
&= (\lambda_r \alpha)^{\beta} \int_0^{\lambda_r T} \tau^{(1-\beta)-1} e^{-\tau}\,d\tau
= (\lambda_r \alpha)^{\beta}\, \nu(1-\beta, \lambda_r T),
\end{aligned}
$$

where $\nu(\gamma, x) = \int_0^x \tau^{\gamma-1} e^{-\tau}\,d\tau$ is the lower incomplete gamma function. The second term of Eq. (30) is given by

$$
\lambda_r \int_0^T (\alpha/T)^{\beta} e^{-\lambda_r t}\,dt = (\alpha/T)^{\beta}\left(1 - e^{-\lambda_r T}\right).
$$

Thus, the average number of map-requests sent to a mapping system per identifier per second is given by

$$
\begin{aligned}
P_{rm}(T) + P_{ro}(T)
&= (\alpha/T)^{\beta} + (\lambda_r \alpha)^{\beta}\, \nu(1-\beta, \lambda_r T) - (\alpha/T)^{\beta}\left(1 - e^{-\lambda_r T}\right) \\
&= (\alpha/T)^{\beta} e^{-\lambda_r T} + (\lambda_r \alpha)^{\beta}\, \nu(1-\beta, \lambda_r T).
\end{aligned}
\tag{31}
$$
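Eq. (31) can be evaluated directly with the lower incomplete gamma function; the following sketch does so with SciPy, where ν(γ, x) corresponds to gammainc(γ, x)·Γ(γ), and sweeps a few timeouts and handover rates in the spirit of Fig. 16. It should agree with the quadrature sketch given after Eq. (30).

```python
# Sketch: closed-form Eq. (31) via SciPy's incomplete gamma function.
# Note that nu(gamma, x) above is the non-regularized lower incomplete gamma,
# i.e. nu(gamma, x) = gammainc(gamma, x) * Gamma(gamma) in SciPy terms.
import numpy as np
from scipy.special import gammainc, gamma

def map_request_rate(T, alpha, beta, lam_r):
    """Average number of map-requests per identifier per second, Eq. (31),
    for handover process 1 with Policy 1 or Policy 2.
    (Under Policy 3 the rate would instead be (1 + delta_r)*P_rm + delta_r*P_ro.)"""
    nu = gammainc(1.0 - beta, lam_r * T) * gamma(1.0 - beta)
    return (alpha / T) ** beta * np.exp(-lam_r * T) + (lam_r * alpha) ** beta * nu

if __name__ == "__main__":
    alpha, beta = 0.1, 0.95                  # values used for Fig. 16
    for lam_r in (0.01, 0.001, 0.0001):      # handover rates shown in Fig. 16
        rates = [map_request_rate(T, alpha, beta, lam_r)
                 for T in (100, 300, 1000, 3000, 10000)]
        print(lam_r, ["%.2e" % r for r in rates])
```

Over such a sweep, the computed rate first drops steeply and then flattens as T grows, which is the behavior of Fig. 16 discussed below.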
Given α and β, we can obtain the average number of map-requests sent to a mapping system per identifier per second by using Eq. (31). Fig. 16 shows the average number of map-requests per identifier per second in handover process 1 when α = 0.1 and β = 0.95. From this figure, we observe that the average number of map-requests per identifier per second first decreases significantly as T increases. Once T exceeds a certain point, however, the further reduction becomes marginal. For example, when λ_r = 0.0001, the average number of map-requests per identifier per second decreases very slowly once T is larger than 3,000 seconds. This implies that the benefit of a very large T for identifier-to-locator mappings is marginal. We also observe that λ_r has a significant impact on the average number of map-requests per identifier per second. For example, the average number of map-requests per identifier per second when λ_r = 0.001 is about ten times that when λ_r = 0.0001.

Fig. 17 plots the effect of β on the average number of map-requests per identifier per second in handover process 1 when α = 0.1.
Fig. 17. The effect of β on the average number of map-requests per identifier per second in Handover 1 (α = 0.1; curves for λ_r ∈ {0.001, 0.0001} and β ∈ {0.91, 0.93, 0.95}; cache timeout T from 0 to 10,000 seconds).
Fig. 18. The effect of α on the average number of map-requests per identifier per second in Handover 1 (β = 0.95; curves for λ_r ∈ {0.001, 0.0001} and α ∈ {0.1, 0.01}; cache timeout T from 0 to 10,000 seconds).
From this figure, we observe that β has very little effect on the average number of map-requests per identifier per second, since the curves for different β values are very close when λ_r and α are fixed. In addition, we observe that the average number of map-requests per identifier per second decreases slowly once T exceeds a certain point, regardless of β.

Fig. 18 shows the effect of α on the average number of map-requests per identifier per second in handover process 1 when β = 0.95. From this figure, we observe that α has a significant impact on the average number of map-requests per identifier per second. For example, if λ_r is 0.001, the average number of map-requests per identifier per second when α = 0.1 is almost ten times that when α = 0.01. Similar observations can also be made when λ_r = 0.0001. This is because α is the location parameter of the Pareto distribution and, in our case, the minimal interval between two consecutive packets that are destined to the same identifier and received by the same ITR. Accordingly, a larger α leads to a higher probability of cache misses.

Note that, based on the observations from Fig. 9 and Fig. 16, we can conclude that the cache timeout has a significant impact on the average number of map-requests per second. For example, from Fig. 9, we observed that the optimal cache timeout is about 230 seconds when λ_r is 0.01 and α = 0.1. From Fig. 16, we know that the corresponding average number of map-requests per second is about 0.003. If we set the cache
timeout to 60 seconds, this number will increase to 0.005. Therefore, it is necessary to optimize the cache timeout, which motivates our research in this paper.

Second, as stated above, a larger λ_r leads to a larger average number of map-requests per identifier per second. This motivates us to design indirect mapping mechanisms to reduce the average number of map-requests and to make the cache timeout longer [19].

Third, our results can be used to optimize the cache timeout for identifier-to-locator mappings. Note that every time an MN changes its point of attachment, its identifier-to-locator mapping is updated at the mapping server storing the mapping. Thus the mapping server can learn the average handover rate (i.e., λ_r) of the identifier. Accordingly, when an ITR resolves an identifier-to-locator mapping for an identifier, it can obtain λ_r and calculate the optimal cache timeout for that identifier. It is worth noting that we have analyzed in [12] the intervals between two consecutive handovers of mobile users by using real data collected from buses, taxis, and pedestrians. We found that, in most cases (about 90%), the interval between two consecutive handovers experienced by a mobile host is less than 4,800 seconds, which corresponds to a λ_r of at least 0.00021 and implies that handovers are fairly frequent. Accordingly, the optimal cache timeout is less than 10,000 seconds (see Figs. 16–18) in general networks.
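The ITR-side procedure just described can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: total_cost stands in for the cost model derived earlier in the paper (not reproduced here), illustrative_cost is only an example trade-off, and a simple grid search replaces the analytical optimization.

```python
# Hypothetical sketch: when a mapping is resolved, the ITR obtains lambda_r for the
# identifier from the mapping server and chooses the cache timeout T minimizing cost.
import numpy as np

def optimal_timeout(lam_r, total_cost, t_grid=None):
    """Return the T in t_grid that minimizes total_cost(T, lam_r)."""
    if t_grid is None:
        t_grid = np.arange(10.0, 10_000.0, 10.0)   # candidate timeouts (seconds)
    costs = [total_cost(T, lam_r) for T in t_grid]
    return float(t_grid[int(np.argmin(costs))])

def illustrative_cost(T, lam_r, alpha=0.1, beta=0.95, c_res=1.0, c_out=1.0):
    # Illustrative trade-off only (NOT the paper's cost model): frequent
    # map-resolutions for small T versus outdated mappings for large T.
    p_rm = (alpha / T) ** beta            # cache-miss probability, Eq. (29)
    p_out = 1.0 - np.exp(-lam_r * T)      # chance of at least one handover within T
    return c_res * p_rm + c_out * p_out

print(optimal_timeout(0.0001, illustrative_cost))
```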
B. The Effect of Cache Size

In the above analysis, we have assumed that the cache size at an ITR is sufficiently large such that, for the given range of TTLs, the effect of cache overflow is negligible. Here, we briefly discuss the validity of this assumption, assuming that the number of mapping items for static traffic and that for mobile traffic are the same when the cache timeout is the same for both.

First, with advances in storage technologies, the storage capacity of routers is getting larger at a lower cost. For example, the International Technology Roadmap for Semiconductors (ITRS) predicts that the capacity of dynamic random access memory (DRAM) in 2020 will be 46.5 times that in 2007, and the capacity of static random access memory (SRAM) in 2020 will be 17.7 times that in 2007 [29].

Second, this assumption does not mean that we store the complete identifier-to-locator mappings of the whole network at every ITR [17]. On the contrary, new mapping entries are only inserted on demand (triggered by packet arrivals) and old entries are purged after their TTL expires. Indeed, it is reported that the large majority of mappings have a lifetime only slightly longer than the cache timeout value [9]. In addition, even if the cache timeout is as long as 300 minutes, the number of different mappings in a map-cache is about 100,000, corresponding to a cache size of 3,127 Kbytes [9]. Note that although these numbers are for static traffic, we expect that the number of mappings in a map-cache for dynamic traffic is not much larger, and certainly within a couple of orders of magnitude of that for static traffic. In addition, as stated previously, the optimal cache timeout is less than 10,000 seconds (i.e., shorter than 300 minutes). Therefore, even with a 100-fold increase due to dynamic traffic, the cache size would be around 300 Mbytes, which is still well within the reach of off-the-shelf technology. In summary, these results show that our assumption is reasonable.
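The cache-size estimate above can be reproduced with a back-of-the-envelope calculation; the per-entry size is simply inferred from the numbers reported in [9], and the 100-fold factor is the assumption stated above.

```python
# Back-of-the-envelope check of the cache-size estimate (a sketch).
entries_static = 100_000              # mappings for a 300-minute timeout [9]
cache_kbytes_static = 3_127           # corresponding cache size in Kbytes [9]
bytes_per_entry = cache_kbytes_static * 1024 / entries_static   # roughly 32 bytes per entry

scale_dynamic = 100                   # assumed 100-fold increase for dynamic traffic
cache_mbytes_dynamic = entries_static * scale_dynamic * bytes_per_entry / 1024 / 1024
print(f"~{bytes_per_entry:.0f} bytes per entry, ~{cache_mbytes_dynamic:.0f} Mbytes for dynamic traffic")
```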
VIII. CONCLUSIONS

In this paper, we have addressed the problem of determining the optimal cache timeout for identifier-to-locator mappings in networks with identifier/locator separation. We have derived mathematical models to calculate the optimal cache timeout and have shown that it is closely related to the employed handover process. Given a handover process, we have shown that the optimal cache timeout for the identifier-to-locator mapping of an identifier is mainly determined by the average handover rate of that identifier. We have also discussed some possible usage scenarios of our results. We believe that these results are very useful for the design of networks with identifier/locator separation.

ACKNOWLEDGMENTS

We thank the associate editor and the anonymous reviewers for their invaluable comments, which improved the paper.

REFERENCES
[1] D. Meyer, L. Zhang, and K. Fall, "Report from the IAB workshop on routing and addressing," IETF RFC 4984, Sep. 2007.
[2] S. Paul, J. Pan, and R. Jain, "Architectures for the future networks and the next generation Internet: a survey," Computer Commun. (Elsevier), vol. 34, no. 1, pp. 2–42, Jan. 2011.
[3] D. Farinacci, V. Fuller, D. Meyer, and D. Lewis, "Locator/ID separation protocol (LISP)," IETF draft, draft-ietf-lisp-23.txt (work in progress), May 2012.
[4] T. Li, "Recommendation for a routing architecture," IETF RFC 6115, Feb. 2011.
[5] F. Coras, J. Domingo-Pascual, and A. Cabellos-Aparicio, "An analytical model for the LISP cache size," in Proc. 2012 IFIP Networking, vol. 1, pp. 409–420.
[6] M. Ohmori, K. Okamura, H. Hayakawa, and F. Tanizaki, "LISP mapping resolution impacts on initiating bidirectional end-to-end communications," in Proc. 2011 Asia-Pacific Advanced Networking Meeting, pp. 63–70.
[7] L. Jakab, A. Cabellos-Aparicio, F. Coras, D. Saucez, and O. Bonaventure, "LISP-TREE: a DNS hierarchy to support the LISP mapping system," IEEE J. Sel. Areas Commun., vol. 28, no. 8, pp. 1332–1343, Oct. 2010.
[8] J. Kim, L. Iannone, and A. Feldmann, "A deep dive into the LISP cache and what ISPs should know about it," in Proc. 2012 IFIP Networking, vol. 1, pp. 367–378.
[9] L. Iannone and O. Bonaventure, "On the cost of caching locator/ID mappings," in Proc. 2007 ACM CoNEXT.
[10] "Cisco Visual Networking Index: global mobile data traffic forecast update, 2011–2016," Cisco white paper, Feb. 2012. Available: http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-520862.pdf
[11] D. Farinacci, D. Lewis, D. Meyer, and C. White, "LISP mobile node," IETF draft, draft-meyer-lisp-mn-07.txt (work in progress), Apr. 2012.
[12] H. Luo, H. Zhang, and C. Qiao, "How fast do identifier-to-locator mappings change in networks with identifier/locator separation?" in Proc. 2010 International Workshop on the Network of the Future.
[13] V. Fuller, D. Farinacci, D. Meyer, and D. Lewis, "LISP alternative topology (LISP-ALT)," IETF draft, draft-ietf-lisp-alt-10.txt, Dec. 2011.
[14] H. Luo, H. Zhang, and M. Zukerman, "Decoupling the design of identifier-to-locator mapping services from identifiers," Computer Networks, vol. 55, no. 4, pp. 959–974, Mar. 2011.
[15] M. Menth, M. Hartmann, and M. Hofling, "FIRMS: a mapping system for future Internet routing," IEEE J. Sel. Areas Commun., vol. 28, no. 8, pp. 1326–1331, Oct. 2010.
[16] H. Luo, Y. Qin, and H. Zhang, "A DHT-based identifier-to-locator mapping approach for a scalable Internet," IEEE Trans. Parallel Distrib. Syst., vol. 20, no. 12, pp. 1790–1802, Dec. 2009.
[17] D. Jen and L. Zhang, "Understand mapping," IETF draft, draft-jen-mapping-00.txt, July 2009.
[18] H. Zhang, M. Chen, and Y. Zhu, "Evaluating the performance on ID/Loc mapping," in Proc. 2008 IEEE GLOBECOM.
[19] H. Luo, H. Zhang, and C. Qiao, "Efficient mobility support by indirect mapping in networks with locator/identifier separation," IEEE Trans. Veh. Technol., vol. 60, no. 5, pp. 2265–2279, June 2011.
[20] F. Qiu, X. Li, and H. Zhang, "Mobility management in identifier/locator split networks," Wireless Personal Commun., DOI: 10.1007/s11277-011-0269-8.
[21] P. Dong, J. Chen, and H. Zhang, "A network-based localized mobility approach for locator/ID separation protocol," IEICE Trans. Commun., vol. E94-B, no. 6, pp. 1536–1545, June 2011.
[22] P. Mockapetris, "Domain names–concepts and facilities," IETF RFC 1034, Nov. 1987.
[23] D. Johnson, C. Perkins, and J. Arkko, "Mobility support in IPv6," IETF RFC 3775, June 2004.
[24] H. Soliman, C. Castelluccia, K. Elmalki, and L. Bellier, "Hierarchical Mobile IPv6 mobility management (HMIPv6)," IETF RFC 5380, Oct. 2008.
[25] Y.-B. Lin and A. Noerpel, "Implicit deregistration in a PCS network," IEEE Trans. Veh. Technol., vol. 43, no. 4, pp. 1006–1010, Nov. 1994.
[26] Y. Fang, I. Chlamtac, and Y.-B. Lin, "Portable movement modeling for PCS networks," IEEE Trans. Veh. Technol., vol. 49, no. 4, pp. 1356–1463, July 2000.
[27] L. Kleinrock, Queueing Systems: Theory, Volume I. Wiley, 1975.
[28] V. Paxson and S. Floyd, "Wide area traffic: the failure of Poisson modeling," IEEE/ACM Trans. Networking, vol. 3, no. 3, pp. 226–244, June 1995.
[29] ITRS, "International technology roadmap for semiconductors," 2006.

Hongbin Luo is a full professor at the School of Electronic and Information Engineering, Beijing Jiaotong University. From Sep. 2009 to Sep. 2010, he was a visiting scholar at the Department of Computer Science, Purdue University. He has authored more than 40 peer-reviewed papers in leading journals (such as IEEE/ACM Transactions on Networking and IEEE Journal on Selected Areas in Communications) and conference proceedings. His research interests are in the wide areas of network technologies, including routing, Internet architecture, and optical networking. He is an editor of IEEE Communications Letters and a Technical Program Committee (TPC) member of IEEE HPSR 2013, ITC 2013, and AINA 2013. He has served as a TPC member of IEEE GLOBECOM 2007–2012, IEEE ICC 2007–2012, and many other conferences.
Hongke Zhang received his M.S. and Ph.D. degrees in Electrical and Communication Systems from the University of Electronic Science and Technology of China (formerly known as the Chengdu Institute of Radio Engineering) in 1988 and 1992, respectively. From Sep. 1992 to June 1994, he was a postdoctoral research associate at Beijing Jiaotong University (formerly known as Northern Jiaotong University). In July 1994, he joined Beijing Jiaotong University, where he is a professor and dean of the School of Electronic and Information Engineering. He has published more than 100 research papers in the areas of communications, computer networks, and information theory. He is the author of eight books written in Chinese. Dr. Zhang is the recipient of various awards, including the Zan Tianyou Science and Technology Improvement Award (2001), the Mao Yisheng Science and Technology Improvement Award (2003), and the first-class Science and Technology Improvement Award of the Beijing government (2005). He is also the Chief Scientist of a National Basic Research Program of China.

Professor Chunming Qiao directs the Lab for Advanced Network Design, Analysis, and Research (LANDER), which conducts cutting-edge research with current foci on optical networking and survivability issues in cloud computing, human factors and mobility in wireless networks, low-cost and low-power sensors, and mobile sensor networking. He has published about 100 and 150 papers in leading technical journals and conference proceedings, respectively, with an h-index of about 50 (according to Google Scholar). His pioneering research on the Optical Internet in the mid-1990s, in particular the optical burst switching (OBS) paradigm, has produced some of the most highly cited works in the area. In addition, his work on integrated cellular and ad hoc relaying systems (iCAR), started in 1999, is recognized as a harbinger of today's push toward convergence between heterogeneous wireless technologies, and has been featured in BusinessWeek and Wireless Europe, as well as on the websites of New Scientist and CBC. His research has been funded by nine NSF grants on which he is a PI, including two ITR awards, and by eight major telecommunications companies, as well as by the Industrial Technology Research Institute (in Taiwan). Dr. Qiao has given a dozen keynotes and numerous invited talks on the above research topics. He has chaired and co-chaired a dozen international conferences and workshops. He was an editor of several IEEE transactions and a guest editor for several IEEE Journal on Selected Areas in Communications (JSAC) issues. He was the chair of the IEEE Technical Committee on High Speed Networks (HSN) and currently chairs the IEEE Subcommittee on Integrated Fiber and Wireless Technologies (FiWi), which he founded. He was elected an IEEE Fellow for his contributions to optical and wireless network architectures and protocols.