H2N4: Packet Forwarding on Hierarchical Hash-based Names for Content Centric Networks
Aytac Azgin, Ravishankar Ravindran, Guoqiang Wang
Huawei Research Center, Santa Clara, CA, USA
{aytac.azgin, ravi.ravindran, gq.wang}@huawei.com

Abstract—In this paper, we propose a novel solution to the scalable forwarding problem in content centric networks (CCN) that relies on the use of hash-based content names for packet forwarding. The proposed approach preserves the hierarchical naming format of CCN by converting human-readable name prefixes to their hash-based counterparts to enable faster packet processing at the content routers, with additional modifications at the network edges to support hash-based forwarding. We present an in-depth study of the proposed architecture using numerical analysis that relies on actual datasets and realistic performance measures. Our results suggest that the proposed architecture significantly reduces storage requirements and processing overhead, supporting higher line rates and more efficient resource usage in the network.

I. INTRODUCTION

Information centric networking (ICN) proposes to shift the Internet architecture towards a content-centric design by making data the principal entity for communication and integrating in-network caching into content delivery [1], [2]. Many challenges remain before the ICN objectives can be realized in a practical setting; entity (i.e., host or content) mobility, packet forwarding, security, and caching are classic examples. In this paper, we address the concerns related to forwarding, and propose a simple yet effective solution to improve the forwarding performance in ICN, with specific emphasis on content centric networking (CCN [3], or named data networking, NDN [4]; hereafter, we use CCN to refer to either).

CCN assumes hierarchical human-readable names to identify entities, e.g., /domain/user/content, and relies on stateful name-based forwarding to pull content from content producers (referred to simply as the Producer) using specific request messages (referred to as Interests) [5]. Each CCN router consists of three essential components: (i) the Content Store (CS), which stores cached Data packets; (ii) the Pending Interest Table (PIT), which stores information on active (or pending) content requests, such as the name, nonce (a 32-bit random value), timeout values, and incoming/outgoing interfaces; and (iii) the Forwarding Information Base (FIB), which stores name-to-outgoing-interface mappings with the corresponding forwarding metrics.

One of the inherent drawbacks to forwarding in CCN is the use of theoretically unbounded names (consisting of any number of components, each carrying any number of characters) to forward the Interest packets. The naming convention implemented by CCN can easily lead to an overall performance degradation

in forwarding due to increased transmission overhead (which directly depends on the length of content names), extensive use of processing resources, and additional storage requirements. For instance, since CCN assumes stateful forwarding, each new content request triggers an update on the PIT, inserting a new entry (or updating a previous one) based on the received Interest. Each PIT entry consists of at least three components: the content name, nonce values, and interface lists. Nonce values are used to identify unique Interests, as each Interest carries a randomly assigned nonce. Interface lists record the incoming interfaces of pending Interests (to support aggregation) and the outgoing interfaces of forwarded Interests (to support multipath forwarding or to enable quick recovery by re-sending Interests over different interfaces). Hence, longer content names increase the storage requirements, which can preclude the use of faster memory resources to store the PIT entries, thereby limiting the Interest processing rate.

Furthermore, as the forwarding process typically involves longest prefix matching (LPM) to match the content name to an outgoing interface, longer names with more components require more prefix look-ups per received Interest. As the number of prefixes in an information centric network can scale to billions (e.g., to uniquely represent each content, user entity, or host device), we can expect the number of FIB entries to easily reach hundreds of millions. (We note that the number of registered Internet domain names has already exceeded 284 million [6].) With this many entries, and given the size limitations of fast-access memory types (such as SRAMs or RL-DRAMs), it becomes difficult to implement hardware solutions that effectively limit the search space during LPM to ease the processing load associated with forwarding. For instance, Bloom filters are known to be effective in reducing the number of forwarding lookups; however, their efficiency decreases (i) as the number of entries increases, forcing a move towards larger but slower memory components to preserve a target false positive ratio (or leading to a higher false positive ratio at constant memory use), and/or (ii) as the number of name components increases, due to an increase in false positives. Additionally, if we use software programmable routers, it becomes necessary to implement hash tables with fast lookup features to efficiently store the FIB entries. However, such a process still requires component-based hashing on each received Interest, with a direct impact on the perceived forwarding performance.

In this paper, we introduce a novel hash-based forwarding solution that bypasses CCN's forwarding limitations by converting content names to their hash-driven counterparts to support resource-efficient forwarding. We propose modifications to the architectural components of CCN to support hash-based forwarding and demonstrate noticeable improvements in network overhead and forwarding look-up performance. In short, we show that the proposed architecture can significantly improve the network capacity to support a higher number of hosts, devices, or entities.

The rest of the paper is organized as follows. We give a brief overview of CCN's name-based forwarding in Section II. We present the proposed architecture in Section III, and explain the forwarding operation in Section IV. We demonstrate the performance of our solution in Section V. We address security and naming related concerns in Section VI before concluding in Section VII.

II. PACKET FORWARDING IN CCN

Figure 1 illustrates the simplified forwarding operation in a content router. After an Interest is received by the content router, the received message is first parsed to extract the content name, which is then used to check for a cached Data packet matching the request. If a match is found in the CS, the request is dropped and the matching Data packet is forwarded along the reverse path through the interface on which the Interest was received. If no match is found in the CS, the content name is used to search for an existing entry in the PIT.

Fig. 1. Forwarding lookup operations performed at a CCN router.

If an existing entry is found in the PIT, the nonce values are checked to see whether the request is a duplicate. If so, the request is dropped without any update. Otherwise, the nonce and interface lists are updated with information from the received request, before the Interest is dropped. If no matching entry exists in the PIT, the routable portion of the content name is used to search for the longest prefix match (LPM) entry in the FIB. If no match is found in the FIB, the Interest is dropped. Otherwise, the strategy layer selects the best interface(s) over which to forward the Interest.

On the reverse path (from Producer to Consumer), after a Data packet is received, an initial check is performed at the CS to see whether the Data packet is already cached. If it is, the packet is dropped. Otherwise, an entry search is performed on the PIT to find a matching request. If no match is found, the packet is dropped. Otherwise, the content router forwards the Data packet along the matching incoming interfaces (extracted from the PIT entry), deletes the existing PIT entry, and caches the Data packet.
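For concreteness, the following Python sketch traces the lookup pipeline described above. The dict-based tables and callable faces are illustrative stand-ins, not part of any CCN implementation.

```python
# Sketch of the forwarding pipeline in Fig. 1. Tables are plain dicts
# keyed by name; faces are callables that transmit a packet.

def on_interest(interest, in_face, cs, pit, fib, strategy):
    """Interest pipeline: CS -> PIT -> FIB."""
    if interest["name"] in cs:                       # CS hit: serve from cache
        in_face(cs[interest["name"]])
        return
    entry = pit.get(interest["name"])
    if entry is not None:                            # PIT hit
        if interest["nonce"] in entry["nonces"]:
            return                                   # duplicate: drop
        entry["nonces"].add(interest["nonce"])       # aggregate the request
        entry["in_faces"].add(in_face)
        return
    faces = longest_prefix_match(fib, interest["name"])
    if not faces:
        return                                       # no route: drop
    pit[interest["name"]] = {"nonces": {interest["nonce"]},
                             "in_faces": {in_face}, "out_faces": set()}
    for face in strategy(faces):                     # strategy layer picks face(s)
        pit[interest["name"]]["out_faces"].add(face)
        face(interest)

def longest_prefix_match(fib, name):
    """Drop one /component at a time until a FIB prefix matches."""
    while name:
        if name in fib:
            return fib[name]
        name = name.rsplit("/", 1)[0]
    return None

def on_data(data, cs, pit):
    """Data pipeline on the reverse path: CS check -> PIT match."""
    if data["name"] in cs:
        return                                       # already cached: drop
    entry = pit.pop(data["name"], None)
    if entry is None:
        return                                       # unsolicited: drop
    for face in entry["in_faces"]:
        face(data)                                   # forward to requesters
    cs[data["name"]] = data                          # cache the Data packet
```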

III. SYSTEM MODEL

A. Motivation

We address the performance trade-offs in CCN forwarding using iteratively formatted hash-based names, replacing hierarchically structured human-readable names with hierarchically structured hash components. Our proposed solution improves the forwarding performance as follows:
§ First, by using hash-based components that are shorter than their human-readable counterparts, we reduce the transmission overhead.
§ Second, by integrating forwarding hash values within the hash-based components, we minimize the need for prefix hashing to identify existing PIT/FIB entries. Doing so allows the system to significantly reduce the processing overhead associated with forwarding, as content name hashing is no longer required by default.
§ Third, by storing FIB, PIT, and CS entries using the integrated hash values instead of the original content names, we also achieve a significant reduction in storage requirements. From an architectural point of view, depending on the final size requirements, this allows the use of smaller memory components to store these entries, for instance, by replacing DRAM-based (or RL-DRAM-based) components with RL-DRAM-based (or SRAM-based) components, thereby potentially leading to shorter access times to acquire the entries.
§ Lastly, the resulting hash-based names preserve the hierarchical structure of CCN, while incorporating cryptographic features directly into the content name, providing additional protection against DoS attacks.

B. Forwarding Architecture

We illustrate the proposed architecture in Figure 2 from the point of view of a single Consumer-Producer pair. To support hash-based forwarding, our architecture assumes the presence of a name resolution system consisting of distributed name registration/resolution servers (NRservs) that provide the necessary name-to-hash mappings (NHmaps) to the content routers. The figure also illustrates the forwarding operation in our architecture, consisting of the more general Registration and Forwarding phases, and the more specific Forwarding Look-up phase, which are explained shortly.

Fig. 2. Overlay content delivery scenario using hash-based forwarding (registration steps ς1-ς3; request and response steps ς4-ς15).

To transform a content name into its hash-based counterpart, we use the following procedure. Assume we are given a k-component content name, represented as C^{|k|} and expanded as /c1/c2/.../ck, where ci represents the i-th name component. We represent the i-th level prefix of C^{|k|} as Ci, i.e., Ci = /c1/.../ci. We propose the use of a cryptographic hash function H, shared by the global name resolution system, to transform C^{|k|} to /h1/h2/.../hk, where hi is a function of H(Ci). We initially assume H is a 64-bit cryptographic hash function. However, we use the complete output of H only for C1, whereas we use a 32-bit shortened version of H's output (i.e., H*) for {Ci}, i > 1, which allows for sufficient protection against hash collisions and security threats while reducing the storage size requirements. Also note that we do not put any restrictions on the distribution of hashes or hash functions to different component groups, allowing future flexibility on such assignments or the use of short/long and full/partial hashes on different component groups. To generate the 32-bit hash values, we can simply sum the prime-multiplied 32-bit halves of the original 64-bit hash output.
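A minimal sketch of this transformation follows, with truncated SHA-256 standing in for the shared 64-bit hash H and illustrative primes for the 32-bit folding (the paper fixes neither).

```python
import hashlib

P1, P2 = 0x01000193, 0x0100019D      # assumed primes for the folding

def h64(prefix: str) -> int:
    """64-bit H: here, the first 8 bytes of SHA-256."""
    return int.from_bytes(hashlib.sha256(prefix.encode()).digest()[:8], "big")

def h32(prefix: str) -> int:
    """Shortened H*: prime-multiplied sum of the 32-bit halves of H."""
    h = h64(prefix)
    return (P1 * (h >> 32) + P2 * (h & 0xFFFFFFFF)) & 0xFFFFFFFF

def hash_name(name: str):
    """Map /c1/.../ck to /h1/.../hk, hashing the i-th level prefix C_i:
    the full 64-bit output for C1, the 32-bit H* for deeper prefixes."""
    comps = name.strip("/").split("/")
    out = []
    for i in range(1, len(comps) + 1):
        prefix = "/" + "/".join(comps[:i])           # C_i = /c1/.../ci
        out.append(h64(prefix) if i == 1 else h32(prefix))
    return out

# A 3-component name yields one 64-bit and two 32-bit hash components:
print([hex(h) for h in hash_name("/domain/user/content")])
```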

Fig. 3. Basic PIT design.

Fig. 4. Basic FIB design.

We illustrate the designs for the PIT and FIB in Figures 3 and 4, respectively, which are based on hash tables (HTs), as they offer higher flexibility and better performance [7] compared to other possible approaches [8]. We adopt the compact array structure suggested in [9] for the HT buckets, where each bucket consists of seven 64-bit PIT/FIB slots (each holding a 32-bit portion of the hash and a 32-bit index into the table storing the actual entries, or pointers to them), plus a pointer to a linked list, if needed, to store dynamically allocated entries. The basic design implements a simplified approach (relying on successive 64-byte cache line reads) to store the minimum required elements per PIT/FIB entry. Specifically, for the FIB, we store the full hash (FH), the content name (which is the hash-based name in the proposed framework), and interface metrics (available interfaces, cost, etc.), whereas for the PIT, we store the FH, content name, nonce list, interface list, and entry timeout. For the given designs, the minimum required size per entry in the PIT (or FIB) index table is K_PIT = 280 bits (or K_FIB = 192 bits). As a result, each PIT entry consumes, on average, 344 + L bits, whereas each FIB entry consumes, on average, 272 + L bits, where L represents the average content name length. (These totals assume, for the PIT entries, a single 32-bit nonce value, 16-bit incoming/outgoing interface elements, and a 32-bit timeout value, and for the FIB entries, a 16-bit outgoing interface element.) If the ratio of aggregatable Interests within the PIT is small, we can store nonce and interface values directly inside the index table. Furthermore, if the number of interfaces is small, we can use short bit-arrays to represent all the matching incoming and outgoing interfaces; for instance, two 16-bit arrays are sufficient to store the minimum interface information if the number of interfaces is at most 16.
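The bucket organization can be mimicked behaviorally as below; this models only the lookup logic, not the physical 64-byte cache-line layout, and the seven-slot limit follows [9].

```python
from dataclasses import dataclass, field

@dataclass
class Bucket:
    """Compact-array bucket: up to seven 64-bit slots (32-bit hash tag +
    32-bit entry-table index) plus an overflow list standing in for the
    dynamically allocated linked list."""
    slots: list = field(default_factory=list)
    overflow: list = field(default_factory=list)

    def find(self, tag):
        for t, idx in self.slots:                 # one cache-line read
            if t == tag:
                return idx
        for t, idx in self.overflow:              # extra reads if chained
            if t == tag:
                return idx
        return None

    def insert(self, tag, index):
        (self.slots if len(self.slots) < 7 else self.overflow).append((tag, index))

class IndexTable:
    """PIT/FIB index: one 32-bit half of the 64-bit forwarding hash picks
    the bucket, the other half verifies the entry (see Sec. III-C)."""
    def __init__(self, n_buckets: int):
        self.buckets = [Bucket() for _ in range(n_buckets)]

    def lookup(self, fh64: int):
        b = self.buckets[(fh64 >> 32) % len(self.buckets)]
        return b.find(fh64 & 0xFFFFFFFF)

    def insert(self, fh64: int, index: int):
        b = self.buckets[(fh64 >> 32) % len(self.buckets)]
        b.insert(fh64 & 0xFFFFFFFF, index)
```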

C. Forwarding Lookup

The default PIT look-up procedure is a two-stage process involving, at minimum, the use of 64-bit hash values, of which the initial 32-bit portion is used to identify the bucket location, and the latter 32-bit portion is used to verify the PIT entry. (If the 64-bit output of H is shortened to a 32-bit value also for the first name component, then the 64-bit hash value used during forwarding look-up is created by transforming H(C1) to a 32-bit forwarding hash, which is then used for entry verification.) Content routers use the hash value on the maximum-length content name to find the bucket, and any combination of shorter component hashes for entry verification; the decision on which components to choose, and how, is left to each content router. (The only exception is when look-ups are performed on single-component names, which are represented by the single hash value extracted from the Interest.) Doing so provides better security, as content routers can set their matching rules independently to prevent or limit the effectiveness of Interest collision attacks targeting a specific matching pattern (which can trigger additional lookups, lowering system performance), even when the routers utilize the same hash function by default (as the entries that fall into the same bucket will differ from one content router to another).

We can assume a similarly secure entry matching process for the FIB look-ups. Specifically, the default procedure requires content routers to utilize consecutive entries to perform longest prefix matching (LPM). For instance, if the requested content name has k components, the first LPM check uses the concatenated H(Ck):H(Ck-1), the second uses the concatenated H(Ck-1):H(Ck-2), and so on. Our solution provides the flexibility for each content router to uniquely arrange the component hashes to create an entry at any level. Due to the use of unique hash values at each level and independent per-level hashing, the proposed solution preserves the FIB look-up efficiency of CCN with name-based lookups (which is assumed to use 64-bit hash values on each component prefix) despite using concatenated shorter hash values. Lastly, we note that the default look-up enables quick and easy parsing of the component hash values in 32-bit blocks, as no knowledge beyond the length of the hash-based name (or the number of components) is required to identify the components.
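A sketch of the resulting hash-based LPM, assuming (per the parenthetical note above) that every component, including the first, carries a 32-bit forwarding hash; the dict-based FIB is illustrative.

```python
def fib_lpm(component_hashes, fib):
    """Probe longest prefix first with concatenated adjacent prefix
    hashes H(C_i):H(C_{i-1}); fall back to the single-hash probe for
    one-component names."""
    k = len(component_hashes)
    for i in range(k - 1, 0, -1):            # pairs (h_k,h_{k-1}) ... (h_2,h_1)
        probe = (component_hashes[i] << 32) | component_hashes[i - 1]
        if probe in fib:
            return fib[probe]                # interfaces for this prefix level
    return fib.get(component_hashes[0])      # single-component case

# Example with made-up 32-bit hashes for /a/b/c and a FIB entry for /a/b:
hashes = [0x11111111, 0x22222222, 0x33333333]
fib = {(0x22222222 << 32) | 0x11111111: ["eth1"]}
print(fib_lpm(hashes, fib))                  # -> ['eth1']
```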

IV. CONTENT RETRIEVAL

In this section, we explain the procedures involved in the Registration and Forwarding phases of our solution.

A. Registration Phase

During the Registration phase, the Producer sends a content (or entity) registration message to its local NRserv, indicating whether the Producer supports hashing in its registered name space (ς1 in Figure 2). For that purpose, the registration message carries a single-bit hash status (HS) flag, where the default value of 0 indicates no hashing support, and a value of 1 indicates hashing support. (Support for hash-to-name or name-to-hash translation is a concern specifically aimed at the end users; we assume the core network supports hash-based name forwarding, which implies support by the in-network caches as well.) After receiving the registration message, the NRserv creates the NHmap (using globally shared or domain-specific hash functions) and updates its local database with the mapping and the hash support information for the Producer. Next, the NRserv creates an acknowledgement message, in response to the Producer's registration request, to deliver two pieces of information (ς2): the NHmap for the registered name and the associated hash function (regardless of whether hashing is supported by the Producer). The NRserv's response, which by default also includes the content name, is recorded at the matching Service Point/Router (i.e., the content router servicing the Producer) before being forwarded to the Producer (ς3), which, depending on its capabilities, either ignores the mapping or stores it accordingly. To ensure that the mapping is only stored at the router servicing the Producer, we can use a single-bit status-update (SU) flag within the registration message, which is set to 0 by the Producer by default, and set to 1 by the matching Service Router after receiving the registration message. The Service Router also creates a temporary entry in its request database to validate the response from the NRserv. After the Producer receives the acknowledgement message from the NRserv, if hashing is locally supported, any content name registered under its name space is stored with the matching NHmap, so that the Producer can quickly respond to received Interests by matching the received hash value to the stored content.
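A condensed sketch of this registration handshake follows; the message fields (name, hs, su) track the text, while the database objects and function signatures are assumed for illustration.

```python
def service_router_on_registration(reg_msg: dict, pending: set) -> dict:
    """First-hop Service Router claims the Producer: set SU=1 and keep a
    temporary entry to validate the upcoming NRserv response (s1)."""
    if reg_msg["su"] == 0:
        reg_msg["su"] = 1
        pending.add(reg_msg["name"])
    return reg_msg

def nrserv_on_registration(reg_msg: dict, db: dict, hash_fn) -> dict:
    """NRserv stores the name-to-hash mapping plus the HS flag, then
    acknowledges with the NHmap and the hash function (s2)."""
    nhmap = {reg_msg["name"]: hash_fn(reg_msg["name"])}
    db[reg_msg["name"]] = {"nhmap": nhmap, "hs": reg_msg["hs"]}
    return {"name": reg_msg["name"], "nhmap": nhmap, "hash_fn": hash_fn}

def service_router_on_ack(ack: dict, pending: set, local_nhmap: dict):
    """Only the router that claimed the registration stores the mapping
    before forwarding the acknowledgement to the Producer (s3)."""
    if ack["name"] in pending:
        pending.discard(ack["name"])
        local_nhmap.update(ack["nhmap"])
    return ack
```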

B. Forwarding Phase

The Forwarding phase begins with the Consumer sending an Interest for a content that the Producer publishes (ς4). For the time being, assume that the content is not cached anywhere else in the network, so the Interest is forwarded to the Producer. After the Service Router at the Consumer side receives the Interest, it first checks its local NHmap database for an existing entry matching the requested content name, either a complete match at the content level or a partial match at the host/device level. Assume that no matching entry is found at the Service Router, which then contacts its local NRserv to acquire the hash-based mapping for the requested content name (ς5). After the NRserv receives the mapping request, it searches for the content name within its local database (communicating with a matching NRserv if no entry is found) and responds with the following information: (i) the longest matching NHmap (for the given host or content name), (ii) the associated hash function (to perform hashing locally on future requests targeting similar domains/hosts), and (iii) the status of hash support (to see whether the Producer has enabled hash support). After the Service Router receives the NRserv's response (ς6), it first updates its mapping database with the received NHmap (ς7) before forwarding the Interest (ς8):
§ If the received NHmap covers the whole name, the Service Router replaces the content name with its hash value, performs the FIB look-up using the hash value, and forwards the Interest along the matching interface (as indicated by the FIB entry).
§ If the received NHmap covers a partial name, and hash support is enabled at the Producer, the Service Router computes the remaining hash components (using the provided hash function), appends them to the received hash value, performs the FIB look-up on the final hash value, and forwards the Interest.
§ If, on the other hand, hash support is not enabled at the Producer side, the Service Router (in addition to calculating the remaining hash values using the received hash function) also includes the original name components not used by the Producer to publish its name space. In doing so, content routers can continue to push the Interest packets forward using the provided hash values, while enabling the Service Router at the Producer side to translate the forwarding hash value (within the received Interest) back to the original content name before forwarding the Interest to the Producer.

After the Producer-side Service Router receives the Interest, if hash support is not enabled at the Producer, the Service Router performs a reverse lookup on its local NHmap database using the hash-based name to determine the content name (ς9) before forwarding the Interest to the Producer (ς10). If, on the other hand, hash support is enabled at the Producer, the Interest is forwarded as is. After the Producer receives the Interest, it responds with a Data packet using the format matching its local requirements, i.e., using the original content name or its hash-based counterpart (ς11). The Data packet is then forwarded along the reverse path (ς13) until it reaches the Consumer-side Service Router, which performs the reverse mapping on the received name (ς14) before forwarding the Data packet to the Consumer (ς15).
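The Consumer-side rewrite logic for the three cases above can be sketched as follows; the NHmap entry fields (hash, covers, hash_fn, producer_hashes) are our own illustrative encoding of the information the NRserv returns.

```python
def rewrite_interest(name: str, entry: dict):
    """Return (hash_name, plain_suffix). plain_suffix is non-empty only
    when the Producer cannot reverse hashes itself (third case), letting
    the Producer-side Service Router restore the original name."""
    comps = name.strip("/").split("/")
    hashed = [entry["hash"]]                 # mapping for the covered prefix
    for i in range(entry["covers"] + 1, len(comps) + 1):
        hashed.append(entry["hash_fn"]("/" + "/".join(comps[:i])))
    suffix = [] if entry["producer_hashes"] else comps[entry["covers"]:]
    return "/" + "/".join(hashed), suffix

# Example: host-level mapping (covers=1), Producer without hash support;
# hash_fn stands in for the NRserv-provided H*.
entry = {"hash": "a1b2c3d4e5f67890", "covers": 1,
         "hash_fn": lambda p: format(abs(hash(p)) % 2**32, "08x"),
         "producer_hashes": False}
print(rewrite_interest("/domain/user/content", entry))
```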

V. PERFORMANCE ANALYSIS

In this section, we present a detailed analysis of the performance of the proposed forwarding architecture. We specifically focus on the proposed solution's impact on storage requirements and processing overhead, and provide an estimate of its impact on network capacity. (We omit the discussion of transmission overhead, due to its lesser impact on overall system performance compared to storage and processing overheads; specifically, we observe that the proposed approach reduces the transmission overhead by approximately 6.5% with an Interest (Data) packet size of 100B (1500B).)

A. Storage Overhead

We compare the per-entry storage requirements of the default and proposed approaches in Table I, for Interest packet sizes varying from 60 bytes to 100 bytes, with approximately 9 bytes per component. We observe that the storage requirements decrease by 30-to-40% with the proposed approach. (Table I considers the storage requirements of each entry independently. Due to the use of multiple entries per bucket, and the resulting dependency, if we assume a partially loaded HT with 4 entries per bucket, we observe a 2-2.5% loss relative to the stated gains, i.e., a 26.2-35.4% decrease in PIT storage requirements and a 28.6-37.7% decrease in FIB storage requirements.)

TABLE I. AVERAGE STORAGE REQUIREMENTS PER PIT/FIB ENTRY
(percentages show the reduction relative to the default approach)

  L_Interest       60 Bytes      80 Bytes      100 Bytes
  PIT (Default)    99B (-)       119B (-)      139B (-)
  PIT (Proposed)   71B (28.2%)   79B (33.6%)   87B (37.4%)
  FIB (Default)    90B (-)       110B (-)      130B (-)
  FIB (Proposed)   62B (31.1%)   70B (36.4%)   78B (40%)
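The table values can be reproduced from the stated per-entry constants; the per-name assumptions in the sketch below (a 4-byte packet overhead and one component per roughly 10 bytes of Interest) are inferred from the table rather than stated in the text.

```python
# Table I from the per-entry constants: 344 + L bits (PIT) and
# 272 + L bits (FIB), where L is the stored name length. Assumed: a
# name occupies the Interest size minus 4 bytes, and a hash name costs
# 8 bytes (64-bit h1) plus 4 bytes per further component.
PIT_FIXED, FIB_FIXED = 344 // 8, 272 // 8    # 43 B and 34 B

for interest in (60, 80, 100):
    n = interest // 10                       # approx. number of components
    name_default = interest - 4              # human-readable name (bytes)
    name_hash = 8 + 4 * (n - 1)              # hash-based name (bytes)
    print(f"{interest}B Interest: "
          f"PIT {PIT_FIXED + name_default}B -> {PIT_FIXED + name_hash}B, "
          f"FIB {FIB_FIXED + name_default}B -> {FIB_FIXED + name_hash}B")
# 60B Interest: PIT 99B -> 71B, FIB 90B -> 62B   (matches Table I)
```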

We compare the maximum number of entries supported for a given memory type by the default approach and the proposed solution in Figure 5. The results also include the case of replacing the variable-length name/hash with a constant-sized (i.e., 128-bit) hash value in the best-case scenario (referred to as ProposedOpt, with all requests unique, hence no aggregation needed).

Fig. 5. Maximum number of PIT/FIB entries supported for a given memory type.

We illustrate the impact of the smaller PIT footprint of the proposed approach on the supported line rates in Figures 6 and 7, which compare the default and proposed approaches using the fastest SRAM-based or the faster RL-DRAM-based off-chip storage, with 60-byte Interest and 1500-byte Data packets. (We calculate the transmission capacity assuming a 250ms round-trip time for the Interests, which requires entries to be kept for approximately 250ms in the PIT. The presented results correspond to the theoretical maximum for a given memory type, which in practice may be lower due to processing limitations.) We observe a significant improvement in the supported transmission link capacity with the proposed approach. Also note that, in reality, the perceived level of improvement is much higher, because the default approach must switch to a larger but slower memory component (i.e., from SRAM to RL-DRAM, or from RL-DRAM to DRAM), as explained shortly.

Fig. 6. Impact of PIT design on the line rate using SRAM to store PIT entries.

Fig. 7. Impact of PIT design on the line rate using RL-DRAM to store PIT entries.
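Our reading of the line-rate model behind these figures can be sketched as below; this is an estimate built from the stated assumptions, not the authors' exact calculation.

```python
# A PIT of S bits holds S / entry_bits concurrent entries, each
# resident ~250 ms (the assumed RTT), and every Interest pulls one
# 1500-byte Data packet.

def max_line_rate_gbps(pit_bits: float, entry_bits: int,
                       rtt_s: float = 0.25, data_bytes: int = 1500):
    entries = pit_bits / entry_bits          # concurrent PIT entries
    interests_per_s = entries / rtt_s        # sustainable Interest rate
    return interests_per_s * data_bytes * 8 / 1e9

# 144 Mb SRAM with the proposed 71-byte PIT entry (60-byte Interests):
print(max_line_rate_gbps(144e6, 71 * 8))     # ~12.2 Gbps
```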

B. Processing Overhead

To determine the impact of switching to a hash-based name format on processing overhead, we performed a numerical analysis of the major operations in CCN packet processing, such as hashing and lookups, that are relevant to the differences between the default and proposed solutions. For the sake of simplicity, let us assume that requests find no match in the CS/PIT. (As suggested in [9], the main CS/PIT entries can be combined, with a single-bit flag identifying the entry type, allowing an existing Data or PIT entry to be detected in one step.) Hence, Interest packet processing involves the following operations: parsing, full-name hashing, CS/PIT lookup and miss, and prefix hashing with LPM-based FIB lookup. Similarly, Data packet processing involves: parsing, full-name hashing, and CS/PIT lookup and hit.

Based on our analysis of the results presented in [9] for an optimized NDN forwarder using 2GHz Xeon processors, the processing overhead associated with full-name hashing (for the request dataset extracted from the IRCache trace files) can be approximated as 220 processing cycles (pcs). (Hashing overhead depends on the prefix length and the hashing algorithm used, so in practice we would observe much higher values, further emphasizing the impact of our solution on overall performance; in [9], the authors used one hashing pass to create all the component hashes, which is also the assumption used for the default approach here, to measure its best-case performance.) Our solution avoids this overhead entirely, as no hashing is performed in the network core.

For the CS/PIT lookup, ρ_PIT = 2-to-3 memory accesses are typically required (with 2 representing the best case in our design). We can approximate the processing overhead associated with CS/PIT lookups as 2 × 10^9 × ρ_PIT × Δ_PIT, where Δ_PIT is the access time of the memory type used to store the PIT entries: Δ_PIT = 5ns for SRAM-based memory, 15ns for RL-DRAM-based memory, and 55ns for DRAM-based memory [10]. For the FIB lookup, we require ρ_FIB = κ_FIB + 2 memory accesses, where κ_FIB represents the expected number of LPM accesses to find a matching entry. (If constant-length hashes are instead used to replace the variable-length content names in the FIB entries, the memory accesses reduce to κ_FIB + 1.) As suggested in [9], we used the publicly available IRCache traces to populate the FIB entries and create content requests. Following the suggested methodology, we determined κ_FIB = 3.45 (close to the number found in [9]). Consequently, the processing overhead associated with FIB lookups is 2 × 10^9 × ρ_FIB × Δ_FIB, where Δ_FIB typically equals 15ns or 55ns (as SRAM-based storage is considered insufficient to contain the expanded FIB table in CCN).

TABLE II. POSSIBLE SCENARIOS ON THE CHOICE OF MEMORY TYPE

  Scenario   A      B        C        D        E
  Default    SRAM   RL-DRAM  RL-DRAM  DRAM     DRAM
  Proposed   SRAM   SRAM     RL-DRAM  RL-DRAM  DRAM

Fig. 8. Processing overhead comparison between the default and proposed solutions under various scenarios.
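A hedged reconstruction of this cycle model, useful for sanity-checking Figure 8; parsing cost is not quantified in the text and is omitted here.

```python
# 2 GHz clock, so cycles = 2e9 * accesses * access_time.
ACCESS_NS = {"SRAM": 5, "RL-DRAM": 15, "DRAM": 55}

def cycles_per_pair(mem_pit, mem_fib, hashed=True,
                    rho_pit=2, kappa_fib=3.45):
    """Processing cycles per Interest-Data pair for one memory scenario;
    hashed=False models the proposed scheme (no hashing in the core)."""
    pit = 2e9 * rho_pit * ACCESS_NS[mem_pit] * 1e-9
    fib = 2e9 * (kappa_fib + 2) * ACCESS_NS[mem_fib] * 1e-9
    h = 220 if hashed else 0                 # full-name hashing cost (pcs)
    interest = h + pit + fib                 # CS/PIT miss + LPM FIB lookup
    data = h + pit                           # CS/PIT hit, no FIB access
    return interest + data

# Scenario C memory for the default vs. scenario B memory for the proposed:
print(cycles_per_pair("RL-DRAM", "RL-DRAM", hashed=True))   # ~723.5
print(cycles_per_pair("SRAM", "RL-DRAM", hashed=False))     # ~203.5
```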

As the proposed solution significantly reduces the storage requirements for both the PIT and FIB, depending on the implemented scenario, we can utilize more efficient RAM modules to store their entries. We list the possible memory-type assignments for the default and proposed solutions in Table II; since the proposed architecture reduces storage requirements, it becomes possible to use smaller and faster memory components. Using these baseline scenarios, we determine the average processing overhead for both solutions and present the results in Figure 8. We also show the percentile improvements in processing overhead and the rate of improvement in network capacity in Figure 9, where the results are sorted to demonstrate the differential impact of the considered scenarios. We observe a 26-to-80% decrease in processing overhead with the proposed solution, suggesting up to a 5.47-times capacity improvement in the best-case scenario, which occurs when either or both tables utilize different memory components. The worst case (a 1.35-times capacity improvement) is observed when both tables are stored in DRAM-based memory modules for both approaches. Also note that, for the scenarios listed in Table II, jointly evaluating the storage and processing limitations, we observe that the proposed architecture can support line rates of up to 74.85Gbps, whereas the default solution is limited to 32.73Gbps.

VI. DISCUSSIONS

A. Selector Usage

The proposed architecture provides limited support for selector use, as we assume caching occurs mostly at the network edges. Therefore, to achieve efficient forwarding, we limit the use of selectors to the network edges rather than the core.

Fig. 9. Impact of switching to a hash-based framework on processing overhead.

Furthermore, enabling the use of selectors on hash-based names requires additional modifications to the proposed architecture. For instance, to support child selectors on the hash-based components, we can create the full-name hash for n components (representing the last hash component of the hash-based name) by combining the original n-th component with the hash of the (n-1)-component prefix, thereby allowing the transmission of Data packets with partial hashes. Another possibility is to implement order- or locality-preserving hashing for the last component (or set of components, as in version/segment) to support selector features. Selectors based on the min-max suffix components could be handled without additional changes (since the hierarchical naming format is preserved) if the PIT design were to utilize direct mapping among entries (i.e., parent pointers within the Name Prefix Hash Table, or NPHT [11]) to search for matching PIT entries. However, to simplify the design, our architecture does not utilize an NPHT-like design, and PIT and FIB entries are accessed separately with no parent-child relationship.

B. Security Considerations

In the current CCN (or NDN) architecture, the message validation process requires the Producer to sign the Data packets it generates using the partial message headers and the payload [12] (i.e., using the name and the content). Since the proposed solution requires the Service Router to replace the original content name with its hash-based counterpart, if the validation process were implemented as is, we would observe validation errors along the path to the Consumer, which may lead to packet drops. To prevent such problems while ensuring that the security requirements are met, we implement the following three-step approach, which assumes the use of a single-bit validation flag in the Data packet:
§ As the first step, we ensure the validity of the Data packet sent from the Producer by requiring the Service Router to validate the received packet using the embedded key and signature. Any message that fails this step is dropped.
§ As the second step, we require the Service Router to unset the default validation-check flag in the header, while including a short Message Authentication Code (MAC) within the message to allow the content routers to validate the integrity and authenticity of the received packet at the core. No other change is required on the Data packet, as the signature created by the Producer is preserved.
§ As the third step, we require the Service Router at the Consumer side to replace the received Data packet header with the original header (based on the Interest header received directly from the Consumer), removing or unsetting any additional verification fields in the Data packet. In doing so, the Consumer can validate the integrity of the received Data packet as sent by the Producer.
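A minimal sketch of this three-step hand-off follows; the dict-based packet model, the MAC construction (HMAC-SHA-256 truncated to 8 bytes), and the key handling are illustrative assumptions, as the paper specifies none of them.

```python
import hashlib
import hmac

def verify_signature(pkt: dict) -> bool:
    """Stub for verifying the Producer's signature (step 1)."""
    return pkt.get("signature") is not None

def ingress_service_router(pkt: dict, hashed_name: str, core_key: bytes):
    """Steps 1-2: validate, swap in the hash-based name, unset the
    end-to-end validation flag, and attach a short MAC for the core."""
    if not verify_signature(pkt):
        return None                              # drop on failure
    pkt["name"] = hashed_name                    # Producer signature kept
    pkt["validation_flag"] = 0
    msg = pkt["name"].encode() + pkt["payload"]
    pkt["core_mac"] = hmac.new(core_key, msg, hashlib.sha256).digest()[:8]
    return pkt

def egress_service_router(pkt: dict, original_name: str):
    """Step 3: restore the original header so the Consumer can validate
    the Producer's signature end to end."""
    pkt["name"] = original_name
    pkt.pop("core_mac", None)
    pkt["validation_flag"] = 1
    return pkt
```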

Also note that the above procedures apply to the Data packet; we assume the use of simple protection schemes for the Interest packet, which can be updated on the fly with minimal effort by the matching Service Routers on either side.

VII. CONCLUSION

In this paper, to address the scalable forwarding problem in content centric networks, we proposed a novel hash-name-based forwarding architecture that relies on hierarchically formatted hash-based components to replace human-readable names. We presented an in-depth analysis of the proposed architecture and explained its operation in a typical setting. We numerically evaluated the performance of our solution under different scenarios, and showed noticeable performance improvements in storage requirements and processing overhead, emphasizing the significant gains that can be achieved in forwarding capacity.

REFERENCES

[1] M. F. Bari, S. R. Chowdhury, R. Ahmed, R. Boutaba, and B. Mathieu, "A survey of naming and routing in information-centric networks," IEEE Communications Magazine, pp. 44-53, Dec. 2012.
[2] G. Xylomenos, C. Ververidis, V. Siris, N. Fotiou, C. Tsilopoulos, X. Vasilakos, K. Katsaros, and G. Polyzos, "A survey of information-centric networking research," 2013.
[3] "CCNx Protocol." http://www.ccnx.org
[4] L. Zhang, A. Afanasyev, J. Burke, V. Jacobson, k. claffy, P. Crowley, C. Papadopoulos, L. Wang, and B. Zhang, "Named data networking," ACM SIGCOMM CCR.
[5] V. Jacobson, D. K. Smetters, J. D. Thornton, M. F. Plass, N. H. Briggs, and R. L. Braynard, "Networking named content," in ACM CoNEXT, 2009.
[6] "VeriSign domain name industry brief." http://www.verisigninc.com/innovation/dnib/
[7] M. Varvello, D. Perino, and L. Linguaglossa, "On the design and implementation of a wire-speed pending interest table," in IEEE NOMEN, 2013.
[8] H. Dai, B. Liu, Y. Chen, and Y. Wang, "On pending interest table in named data networking," in ACM ANCS, 2012.
[9] W. So, A. Narayanan, and D. Oran, "Named data networking on a router: Fast and DoS-resistant forwarding with hash tables," in ACM/IEEE ANCS, 2013.
[10] D. Perino and M. Varvello, "A reality check for content centric networking," in ACM SIGCOMM Workshop on Information-Centric Networking (ICN), 2011.
[11] H. Yuan, T. Song, and P. Crowley, "Scalable NDN forwarding: Concepts, issues and principles," in IEEE ICCCN, 2012.
[12] "NSF Named Data Networking project." http://www.named-data.net/
