2011 Fifth International Conference on Innovative Mobile and Internet Services in Ubiquitous Computing
Gregex: GPU-based High Speed Regular Expression Matching Engine

Lei Wang, Shuhui Chen, Yong Tang, Jinshu Su
School of Computer Science, National University of Defense Technology, Changsha, China
[email protected], [email protected], [email protected], [email protected]
Abstract— The regular expression matching engine is a crucial infrastructure component that is widely used in network security systems such as intrusion detection systems (IDS). We propose Gregex, a Graphics Processing Unit (GPU) based regular expression matching engine for deep packet inspection (DPI). Gregex leverages the computational power and high memory bandwidth of GPUs by storing data in the appropriate GPU memory spaces and executing massive numbers of GPU threads concurrently to process many packets in parallel. Three optimization techniques, ATP, CAB, and CAT, are proposed that significantly improve the performance of Gregex. On a GTX260 GPU, Gregex achieves a regular expression matching throughput of 126.8 Gbps, a speedup of 210× over a traditional CPU-based implementation and of 7.9× over the state-of-the-art GPU-based regular expression engine.
I. INTRODUCTION

Signature-based deep packet inspection (DPI) has become one of the most important mechanisms in network security systems. DPI inspects entire packets traveling through the network in real time to detect threats such as intrusions, worms, viruses, and spam. Regular expressions are widely used for describing DPI signatures because they are much more expressive and flexible than simple strings. Network intrusion detection systems (NIDS) such as Snort [1] use regular expressions to describe complicated signatures.

Due to the limited computational power of CPUs and the high latency of I/O access [2], pure software implementations of regular expression matching engines cannot satisfy the performance requirements of DPI. A possible solution is offloading regular expression matching to hardware platforms [3], [4], [5], [6] such as ASICs, FPGAs, and NPs. Hardware-based solutions can achieve high performance, but they are complex and not flexible enough. Modern GPUs are specialized for compute-intensive, highly parallel computation, and they are cheaper and more programmable than other hardware platforms.

In this paper, we propose Gregex, a high speed GPU-based regular expression matching engine for DPI. In Gregex, the DFA state transition table compiled from the regular expressions resides in the GPU's texture memory, and large batches of packets are copied to the GPU's global memory for matching. Massive numbers of GPU threads run concurrently, with each GPU thread matching one packet. We describe three optimization techniques
for Gregex. On a GTX260 device, Gregex achieves a regular expression matching throughput of 126.8 Gbps, which is about 210× faster than a traditional CPU implementation [7] and 7.9× faster than the solution proposed in [8].

The rest of this paper is organized as follows. In Section II, we present background knowledge and related work on GPU-based regular expression matching techniques. The design and optimization of Gregex are introduced in Section III. The performance results are evaluated in Section IV. Finally, we conclude our work in Section V.

II. BACKGROUND

A. Regular Expression Matching Techniques

Regular expression matching engines can be based on either nondeterministic finite automata (NFA) or deterministic finite automata (DFA). In DPI, DFA approaches are preferred for their better performance. In DFA approaches, a set of regular expressions is usually converted to one DFA by first compiling the expressions into an NFA using the Thompson algorithm [9] and then converting the NFA to a DFA using the subset construction algorithm. Given the compiled DFA and an input string representing the network traffic, DPI needs to decide whether the DFA accepts the input string.

A DFA is represented by a state transition table and a state acceptance table. The state transition table is a two-dimensional matrix whose width and height are equal to the size of the alphabet and the number of states in the DFA, respectively. Each cell of the state transition table contains the next state to move to in the DFA. The state acceptance table is a one-dimensional array whose length is equal to the number of states in the DFA. Each cell of the state acceptance table indicates whether the corresponding state is an accepting state or not. DFA matching requires two state table lookups (two memory accesses) per input byte: getting the next state and deciding whether it is an accepting state. On a modern CPU, one memory access may take many cycles to return a result. In contrast, when a GPU performs DFA matching, massive concurrent thread execution can hide the memory access latency efficiently.
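As an illustration of this representation, the following sequential host-side matcher performs the two table lookups per input byte described above. It is a sketch only; the table layout and identifiers are illustrative, not a normative interface.

#include <stdint.h>
#include <stddef.h>

#define ALPHABET_SIZE 256  /* one column per possible input byte value */

/* Illustrative DFA layout: transition[s * ALPHABET_SIZE + c] gives the
   next state on symbol c; accept[s] is nonzero if state s accepts. */
typedef struct {
    const uint16_t *transition;  /* num_states x ALPHABET_SIZE */
    const uint8_t  *accept;      /* num_states */
} dfa_t;

/* Returns 1 as soon as the DFA reaches an accepting state, else 0.
   Two table lookups per input byte, as described in the text. */
static int dfa_match(const dfa_t *dfa, const uint8_t *input, size_t len)
{
    uint16_t state = 0;                    /* start state */
    for (size_t i = 0; i < len; i++) {
        state = dfa->transition[state * ALPHABET_SIZE + input[i]];
        if (dfa->accept[state])            /* second lookup */
            return 1;
    }
    return 0;
}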
B. The CUDA Programming Model

We briefly review CUDA, which defines the architecture and programming model for NVIDIA GPUs. We focus on GeForce GTX 200 series GPUs; more information can be found in the CUDA documentation [10], [11].

GPU Architecture: The GeForce GTX 200 series GPUs are based on a reengineered, enhanced, and extended Scalable Processor Array (SPA) architecture that consists of 10 Thread Processing Clusters (TPCs). Each TPC is in turn made up of 3 Streaming Multiprocessors (SMs), and each SM contains 8 Streaming Processors (SPs). Every SM also includes texture filtering processors used in graphics processing. The GPU's compute architecture is SIMT (single instruction, multiple threads) for execution across each SM. SIMT improves upon pure SIMD (single instruction, multiple data) designs in both performance and ease of programmability.

Programming Model: In the CUDA model, the data-parallel portions of an application are expressed as device kernels which run on many threads. CUDA threads execute on the device (GPU), which operates as a coprocessor to the host (CPU) running the C program. A CUDA kernel is executed as a grid of thread blocks. The number of threads per block and the number of blocks per grid are specified by the programmer. Threads within a block can cooperate via shared memory, atomic operations, and barrier synchronization. All threads within a block are executed concurrently on one SM, and several blocks can execute concurrently on an SM.

Memory Hierarchy: CUDA devices use several memory spaces, which have different characteristics that reflect their distinct usages in CUDA applications. In addition to a number of 32-bit registers shared across all the active threads, each multiprocessor carries 16 KB of on-chip shared memory. The off-chip global memory is connected to each SM with high transfer bandwidth and is large, but it has high access latency. There are also two read-only memory spaces that provide the additional benefit of hardware caching accessible by all threads: the constant and texture memory spaces. The global, constant, and texture memory spaces are optimized for different memory usages, but their effectiveness cannot be guaranteed.

C. GPU based Regular Expression Matching Engines

Randy Smith et al. proposed a programmable signature matching system prototyped on an NVIDIA G80 GPU [12]. They made a detailed analysis of the regular control flow and of the parallelism available at the packet level. Two types of regular expression matching were examined in their work: standard DFA and extended finite automata (XFA) [13], [14]. The XFA approach uses less memory than DFA but has a more complex execution control flow, which can hurt GPU performance by causing threads of the same warp to diverge. Their evaluation shows that the GPU-based prototype achieves a speedup of 6× to 9× compared to an implementation on a Pentium 4 CPU.

Giorgos Vasiliadis et al. presented a regular expression matching engine based on the GPU [8]. In their work, regular expressions were compiled separately, and whole packets were processed by individual threads in isolation.
Fig. 1. Framework of Gregex, which uses a GTX 260 GPU: the packet buffer and result buffer reside in global memory, and the state table resides in texture memory behind the texture cache.
The experimental results show that regular expression matching on an NVIDIA GeForce 9800 GX2 GPU can achieve up to 16 Gbps of raw processing throughput, a 48× speedup over CPU implementations. Furthermore, they extended the architecture of Gnort [7] by adding a GPU-assisted regular expression matching engine; the overall processing throughput of Snort was increased by a factor of eight compared to the default implementation.

Shuai Mu et al. proposed GPU-based solutions for a series of core IP routing applications [15]. In their work, they implemented a finite-automata-based regular expression matching algorithm for the deep packet inspection application. On an NVIDIA GTX280 GPU, the proposed regular expression matching algorithm achieves a matching throughput of up to 9.3 Gbps and an overall throughput of 3.2 Gbps.

III. THE PROPOSED GREGEX

A. Framework

The framework of Gregex is depicted in Fig. 1. In Gregex, packets are stored in the GPU's global memory, and the DFA state transition table resides in the GPU's texture memory. Texture memory has a hardware cache, so the DFA state transition table lookup latency can be significantly reduced. Packets are processed in batches, and each thread processes one of the packets in isolation. Whenever a match occurs, the thread stores the matching regular expression's ID in the matching result buffer. The matching result buffer is a one-dimensional array allocated in global device memory; the size of the array is equal to the number of packets that are processed by the GPU at a time, as shown in Fig. 2(b).

B. Workflow

The packet processing workflow in Gregex can be divided into three phases: a pre-processing phase, a signature matching phase, and a post-processing phase. The pre-processing and post-processing phases are run by CPU threads and perform the tasks of transferring packets from the CPU to the GPU and of retrieving match results from GPU memory, respectively. The signature matching phase is run by GPU threads and performs the regular expression matching itself. A host-side sketch of this workflow is given below.
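The following host-side sketch outlines the three phases under illustrative assumptions: a fixed 2 KB slot per packet, a kernel named gregex_match_kernel, and page-locked host buffers as used by the ATP optimization of Section III-C. It is a sketch of the workflow, not the exact Gregex code.

#include <cuda_runtime.h>

#define SLOT_SIZE 2048          /* 2 KB per packet slot, cf. Fig. 2(a) */

/* Illustrative kernel declaration; one thread matches one packet. */
__global__ void gregex_match_kernel(const unsigned char *packets,
                                    int *results, int num_packets);

void process_batch(const unsigned char *host_packets, int num_packets,
                   int *host_results)
{
    unsigned char *d_packets; int *d_results;
    cudaStream_t stream;
    cudaStreamCreate(&stream);
    cudaMalloc(&d_packets, (size_t)num_packets * SLOT_SIZE);
    cudaMalloc(&d_results, num_packets * sizeof(int));

    /* Pre-processing: batched, asynchronous host-to-device copy.
       host_packets should be page-locked (cudaMallocHost) for ATP. */
    cudaMemcpyAsync(d_packets, host_packets,
                    (size_t)num_packets * SLOT_SIZE,
                    cudaMemcpyHostToDevice, stream);

    /* Signature matching: one GPU thread per packet. */
    int threads = 256;
    int blocks = (num_packets + threads - 1) / threads;
    gregex_match_kernel<<<blocks, threads, 0, stream>>>(d_packets,
                                                        d_results,
                                                        num_packets);

    /* Post-processing: copy match results back to the host.
       host_results should also be page-locked for a truly
       asynchronous copy. */
    cudaMemcpyAsync(host_results, d_results, num_packets * sizeof(int),
                    cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    cudaFree(d_packets); cudaFree(d_results);
    cudaStreamDestroy(stream);
}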
Fig. 2. The format of (a) packets buffer and (b) matching results buffer in GPU global memory.
1) Pre-processing phase: In the pre-processing phase, Gregex performs the necessary preparation work, including constructing the DFA from the regular expressions and transferring packets to the GPU.

Compiling regular expressions to a DFA: In our work, the state acceptance table is merged into the state transition table as its last column when the DFA is constructed. Once the DFA has been constructed, the state transition table is copied to the GPU's texture memory in two steps: 1. copy the state transition table from CPU memory to GPU global memory; 2. bind the state transition table in global memory to the texture cache.

Transferring packets to the GPU: Now we consider how packets are transferred from CPU memory to device memory. Due to the overhead associated with each transfer, batching many packets into one larger transfer performs significantly better than making each transfer separately [11], so Gregex copies packets to device memory in batches. The format of the buffer allocated for storing packets in global memory is illustrated in Fig. 2(a). The length of each packet slot is set to 2 KB. If a packet is shorter than 2 KB, Gregex pads it with 0x00 bytes at the end; if a packet is longer than 2 KB, Gregex splits it into several smaller ones. An IP packet may be up to 65,535 bytes long, but using the maximum packet length as the slot size in the buffer would waste bandwidth.

2) Signature matching phase: Each GPU thread processes its own packet in isolation during regular expression matching. Algorithm 1 gives the multi-threaded procedure for DFA matching on the GPU.

Algorithm 1. Multi-threaded DFA matching procedure.
Input: packets: a batch of packets to match
Input: DFA: state transition table
Output: Results: match results
1  packet ← packets[thread_ID];
2  current_state ← 0;
3  foreach byte in packet do
4      input ← packet[byte];
5      next_state ← DFA[current_state, input];
6      current_state ← next_state;
7      if DFA[current_state, alphabet_size + 1] = 1 then
8          Results[thread_ID] ← regex_ID;
9      end
10 end
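One possible CUDA realization of Algorithm 1 is sketched below; identifiers such as gregex_match_kernel, SLOT_SIZE, and tex_dfa are illustrative, and the state table is fetched through a texture reference as described in Section III-A, with the acceptance flag occupying the last column of each row.

#define SLOT_SIZE 2048          /* fixed packet slot length, Fig. 2(a) */
#define ALPHABET_SIZE 256

/* 2D texture holding the state transition table; column ALPHABET_SIZE
   stores the acceptance flag merged in during pre-processing. */
texture<int, 2, cudaReadModeElementType> tex_dfa;

__global__ void gregex_match_kernel(const unsigned char *packets,
                                    int *results, int num_packets)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= num_packets) return;

    const unsigned char *packet = packets + (size_t)tid * SLOT_SIZE;
    int state = 0;                                  /* line 2 */

    for (int i = 0; i < SLOT_SIZE; i++) {           /* lines 3-10 */
        int input = packet[i];                      /* line 4 */
        state = tex2D(tex_dfa, input, state);       /* lines 5-6 */
        if (tex2D(tex_dfa, ALPHABET_SIZE, state))   /* line 7 */
            results[tid] = state;  /* record match; a real engine would
                                      map the accepting state to its
                                      regular expression ID (line 8) */
    }
}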
Line 1 gets the address of the packet to match according to the thread's global ID. Lines 2-10 do the work of DFA matching: at each iteration of the foreach loop, the matching thread reads one byte from the packet, looks up the next state in the state transition table, and determines whether it is an accepting state. If the DFA reaches an accepting state, the ID of the regular expression that matched the packet is recorded in Results.

3) Post-processing phase: When all GPU threads have finished matching, the matching result array is copied to CPU memory. The kth cell of the matching result array contains the ID of the regular expression that matches the kth packet; if no match occurred, it is set to zero.

C. Optimizations

Gregex exploits optimization opportunities in the workflow by maximizing parallelism as well as by reducing GPU memory access latency. Three optimization techniques, ATP, CAB, and CAT, are proposed to improve the performance of Gregex.

1) Asynchronous packet Transfer with Page-locked memory (ATP): Packet transfer throughput is the most important performance factor of Gregex. Higher bandwidth between the host and the device is achieved when using page-locked memory [11].

Asynchronous copy: In CUDA, data transfers between the host and the device using the cudaMemcpyAsync function are non-blocking: control is returned immediately to the host thread. Asynchronous copies enable overlapping data transfers with host and device computation.

Zero copy: Zero copy requires mapped page-locked memory and enables GPU threads to access host memory directly. Zero copy makes kernel execution overlap with data transfers automatically.

2) Coalesced global memory access in regular expression matching: Global memory access has a very high latency, about 400-600 cycles for a load/store operation. All global memory accesses by a half-warp¹ of threads can be coalesced into one or two transactions if these threads access a contiguous range of addresses. In Algorithm 1, special attention must therefore be paid to how threads load packets from global memory and store matching results to Results.

¹ In CUDA, a warp is a group of threads executed physically in parallel; a half-warp is the first or second half of a warp of threads.

Coalesced global memory Access by Buffering packets to shared Memory (CAB): In this work, coalesced global memory access is obtained by having each half warp read contiguous locations of global memory into shared memory; there is no performance penalty for non-contiguous access in shared memory as there is in global memory. We use s_packets, a 32×32 shared memory array of 32-bit words, to buffer packet data from global memory for the threads of a block. If the total length of a packet is L bytes, a thread takes L/32 iterations in total to process its packet. In each iteration, the threads in a block cooperatively read data into s_packets to avoid uncoalesced global memory access, and then each thread matches signatures against one row of s_packets separately, as in the sketch below.
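A device-side sketch of the CAB buffering step follows; the tile shape and names are illustrative, and the padding to 33 columns anticipates the bank-conflict analysis presented later in this section.

#define TILE 32

/* Illustrative CAB fragment: a block of 32 threads stages a 32-word
   chunk of each of its 32 packets into shared memory with coalesced
   loads, then each thread matches against its own packet's row.
   The padding column (TILE + 1 = 33) avoids bank conflicts. */
__global__ void cab_match_fragment(const unsigned int *packets_words,
                                   int words_per_packet)
{
    __shared__ unsigned int s_packets[TILE][TILE + 1];
    int tid  = threadIdx.x;                /* 0..31 */
    int base = blockIdx.x * TILE;          /* first packet of this block */

    for (int chunk = 0; chunk < words_per_packet; chunk += TILE) {
        /* Cooperative, coalesced loads: consecutive threads read
           consecutive words of the same packet row. */
        for (int row = 0; row < TILE; row++)
            s_packets[row][tid] =
                packets_words[(size_t)(base + row) * words_per_packet
                              + chunk + tid];
        __syncthreads();

        /* Thread tid now scans row tid (its own packet); reading down
           one column across threads is conflict-free thanks to the
           padding column. */
        /* ... DFA transitions over these 4 * TILE buffered bytes ... */
        __syncthreads();
    }
}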
TABLE I
PERFORMANCE COMPARISON BETWEEN GREGEX AND OTHER GPU BASED IMPLEMENTATIONS.

Hardware   | Algorithm | Throughput (Gbps) | Speedup
GTX260¹    | DFA (CAT) | 126.8             | -
GTX260¹    | DFA (CAB) | 26.9              | 4.7
8600GT²    | Gnort AC  | 1.4 [7]           | 90.5
9800GX2³   | DFA       | 16 [8]            | 7.9
GTX280⁴    | AC        | 9.3 [15]          | 13.6

(Speedup is that of Gregex with CAT over each implementation.)
Fig. 3. The format of the packets buffer after transposing.

¹ GTX260: 216 SPs organized in 27 SMs, running at 1.35 GHz with 896 MB of memory.
² 8600GT: 32 SPs organized in 4 SMs, running at 1.2 GHz with 512 MB of memory.
³ 9800GX2: 256 SPs organized in 16 SMs, running at 1.5 GHz with 512 MB of memory.
⁴ GTX280: 240 SPs organized in 30 SMs, running at 1.45 GHz with 1024 MB of memory.
However, a shared memory bank conflict will occur if two or more threads in a half warp access bytes within different 32-bit words belonging to the same bank. A way to avoid this conflict is to pad the shared memory array by one column. After changing the size of s_packets to 32×33, the data in cell (i, j) and cell (x, y) of s_packets are mapped to the same bank if and only if

|(i * 33 + j) - (x * 33 + y)| mod banks_num = 0,

where banks_num = 16 in the current GPU architecture. When the threads of a half warp read data in the same column, that is, j = y, we have

|(i - x) * 33 + (j - y)| mod banks_num = |i - x| * 33 mod 16.

Thus a bank conflict never occurs within a half warp, since |i - x| * 33 mod 16 ≠ 0 for 0 < |i - x| < 16.

Coalesced global memory Access by Transposing the packets buffer (CAT): Another technique to avoid uncoalesced global memory access is to transpose the packets buffer before matching. Transposing the packets buffer is similar to transposing a matrix. A detailed document [16] about optimizing matrix transpose in CUDA by Greg Ruetsch is released along with the CUDA SDK; in our work, we implement a high performance CUDA matrix transpose kernel simply by following Ruetsch's steps in [16]. With packet buffer transposing, the total time cost of packet processing in Gregex consists of the time used to transfer packets to GPU memory, the time used to transpose the packets buffer, and the time used to match packets against signatures. Transposing the packets buffer makes each half warp of GPU threads access a contiguous range of addresses, as shown in Fig. 3.
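A minimal tiled transpose kernel in the spirit of [16] is sketched below; the tile geometry and names are illustrative, and the padded column again avoids shared memory bank conflicts. It assumes width and height are multiples of TILE_DIM.

#define TILE_DIM   32
#define BLOCK_ROWS 8   /* 32x8 = 256 threads per block, as in [16] */

/* Illustrative tiled transpose of the packets buffer, treated as a
   width x height matrix of 32-bit words: both the global read and the
   global write are coalesced, with the reordering done in shared
   memory. Launch with grid (width/TILE_DIM, height/TILE_DIM) and
   block (TILE_DIM, BLOCK_ROWS). */
__global__ void transpose_packets(unsigned int *out,
                                  const unsigned int *in,
                                  int width, int height)
{
    __shared__ unsigned int tile[TILE_DIM][TILE_DIM + 1]; /* +1 pad */

    int x = blockIdx.x * TILE_DIM + threadIdx.x;
    int y = blockIdx.y * TILE_DIM + threadIdx.y;
    /* Coalesced reads: each iteration loads one row slice of the tile. */
    for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS)
        tile[threadIdx.y + i][threadIdx.x] = in[(y + i) * width + x];
    __syncthreads();

    x = blockIdx.y * TILE_DIM + threadIdx.x;
    y = blockIdx.x * TILE_DIM + threadIdx.y;
    /* Coalesced writes of the transposed tile. */
    for (int i = 0; i < TILE_DIM; i += BLOCK_ROWS)
        out[(y + i) * height + x] = tile[threadIdx.x][threadIdx.y + i];
}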
IV. EVALUATION RESULTS

A. Experimental Setup

Gregex is implemented on a PC with a 2.66 GHz Intel Core 2 Duo processor, 4 GB of memory, and an NVIDIA GeForce GTX 260 GPU card. The GTX260 GPU contains 216 SPs organized in 27 SMs, running at 1.35 GHz with 896 MB of global memory. We implement Gregex under CUDA version 3.1 with device driver version 257.21.

Gregex uses the signatures in the rule set released with Snort 2.7. The rule set consists of 56 different signature sets; for each signature set, we construct a single DFA from all of its regular expressions. We use two different network traces for evaluating the performance of Gregex: a trace collected on the Internet and a trace from the 1998-1999 DARPA intrusion detection evaluation data set [17]. In our experiments, Gregex reads packets from the local disk and then transfers them in batches to GPU memory for processing.

B. Packets Transfer Performance

We first evaluate the throughput of packet transfers from CPU memory to GPU global memory. The throughput of transferring packets to the GPU varies with the data size. For this experiment we test two different kinds of host memory: page-locked memory and pageable memory. Page-locked memory cannot be swapped out to disk by the operating system before the GPU is finished using it, so it is faster than pageable memory, as shown in Fig. 4. Both the graphics card and the mainboard in our system support PCI-E ×16 Gen2; the theoretical peak bandwidth between host memory and device memory (64 Gbps) far exceeds what we actually obtain. Larger transfers perform significantly better than smaller ones, but once the data size grows beyond 8 MB the throughput no longer increases notably.

Fig. 4. Throughput of transferring packets to the NVIDIA GTX 260 GPU with different data sizes, for page-locked and pageable host memory.

C. Regular Expression Matching Performance

In this experiment, we evaluate the processing performance of Gregex, measured as the mean bits of data processed per second.
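Throughput figures of this kind are typically obtained by timing the matching kernel with CUDA events; a sketch follows, with the kernel name and batch geometry taken from the illustrative kernel above rather than from a fixed interface.

#include <cuda_runtime.h>
#include <stdio.h>

/* Illustrative kernel from Section III. */
__global__ void gregex_match_kernel(const unsigned char *packets,
                                    int *results, int num_packets);

/* Times the matching kernel alone (raw matching throughput,
   excluding transfers) with CUDA events. */
void measure_matching_throughput(const unsigned char *d_packets,
                                 int *d_results, int num_packets,
                                 int slot_size, int blocks, int threads)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);
    gregex_match_kernel<<<blocks, threads>>>(d_packets, d_results,
                                             num_packets);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);     /* milliseconds */

    double bits = (double)num_packets * slot_size * 8.0;
    double gbps = bits / (ms * 1e-3) / 1e9;     /* mean bits per second */
    printf("matching throughput: %.1f Gbps\n", gbps);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
}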
Fig. 5. Performance of Gregex versus the number of blocks per grid: (a) regular expression matching throughput for CAT (up to 126.8 Gbps) and CAB (up to 26.9 Gbps); (b) overall throughput for CAT and CAB with and without ATP (up to 25.6 Gbps).
From Fig. 5(a), we can see that Gregex achieves a regular expression matching throughput of 126.8 Gbps in the best case. Table I compares Gregex with other GPU-based regular expression matching engines. The performance statistics presented in Table I are raw performance: the time used for transferring packets to GPU memory is not included in the processing time. Gregex is about 7.9× faster than the state-of-the-art GPU solution proposed in [8].

D. Overall Throughput of Gregex

We now evaluate the overall performance of Gregex. As shown in Fig. 5(b), the best-case overall performance of Gregex is 25.6 Gbps, obtained when packets are transferred asynchronously to GPU global memory using page-locked memory, which is 8× faster than the solution proposed in [15].

V. CONCLUSION

A high speed GPU-based regular expression matching engine, Gregex, is introduced in this paper. Gregex takes advantage of the high parallelism of the GPU to process packets in parallel. We describe three optimization techniques for Gregex in detail: ATP, CAB, and CAT. These optimization techniques significantly improve the performance of Gregex. Our experimental results indicate that Gregex is about 7.9× faster than the state-of-the-art GPU-based regular expression engine. Gregex is highly flexible and low-cost as well as high-speed, and it can easily be applied to network security applications such as IDS and anti-virus systems.

VI. ACKNOWLEDGMENT

This work has been supported by the National High-Tech Research and Development Plan of China under Grant No. 2009AA01A346.

REFERENCES
[1] Snort, www.snort.org.
[2] N. Jacob and C. Brodley, "Offloading IDS computation to the GPU," in Proceedings of the 22nd Annual Computer Security Applications Conference. IEEE Computer Society, 2006, pp. 371-380.
[3] S. Kumar, J. Turner, and J. Williams, "Advanced algorithms for fast and scalable deep packet inspection," in Proceedings of the 2006 ACM/IEEE Symposium on Architecture for Networking and Communications Systems. San Jose, California, USA: ACM, 2006, pp. 81-92.
[4] F. Yu, Z. Chen, Y. Diao, T. V. Lakshman, and R. H. Katz, "Fast and memory-efficient regular expression matching for deep packet inspection," in Proceedings of the 2006 ACM/IEEE Symposium on Architecture for Networking and Communications Systems. San Jose, California, USA: ACM, 2006, pp. 93-102.
[5] B. C. Brodie, R. K. Cytron, and D. E. Taylor, "A scalable architecture for high-throughput regular-expression pattern matching," SIGARCH Comput. Archit. News, vol. 32, no. 2, pp. 191-202, 2006.
[6] M. Becchi, C. Wiseman, and P. Crowley, "Evaluating regular expression matching engines on network and general purpose processors," in Proceedings of the 2009 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Princeton, New Jersey, 2009.
[7] G. Vasiliadis, S. Antonatos, M. Polychronakis, E. P. Markatos, and S. Ioannidis, "Gnort: High performance network intrusion detection using graphics processors," in Proceedings of the 11th International Symposium on Recent Advances in Intrusion Detection. Cambridge, MA, USA: Springer-Verlag, 2008, pp. 116-134.
[8] G. Vasiliadis, M. Polychronakis, S. Antonatos, E. P. Markatos, and S. Ioannidis, "Regular expression matching on graphics hardware for intrusion detection," in Proceedings of the 12th International Symposium on Recent Advances in Intrusion Detection, Saint-Malo, France, 2009, pp. 265-283.
[9] K. Thompson, "Programming techniques: Regular expression search algorithm," Commun. ACM, vol. 11, no. 6, pp. 419-422, 1968.
[10] NVIDIA, "CUDA C Programming Guide, version 3.1."
[11] NVIDIA, "CUDA C Best Practices Guide, version 3.1."
[12] R. Smith, N. Goyal, J. Ormont, K. Sankaralingam, and C. Estan, "Evaluating GPUs for network packet signature matching," in Proceedings of the International Symposium on Performance Analysis of Systems and Software, 2009.
[13] R. Smith, C. Estan, and S. Jha, "XFA: Faster signature matching with extended automata," in IEEE Symposium on Security and Privacy. IEEE Computer Society, 2008, pp. 187-201.
[14] R. Smith, C. Estan, S. Jha, and S. Kong, "Deflating the big bang: fast and scalable deep packet inspection with extended finite automata," SIGCOMM Comput. Commun. Rev., vol. 38, no. 4, pp. 207-218, 2008.
[15] S. Mu, X. Zhang, N. Zhang, J. Lu, Y. S. Deng, and S. Zhang, "IP routing processing with graphic processors," in Design, Automation and Test in Europe, 2010, pp. 93-99.
[16] G. Ruetsch and P. Micikevicius, "Optimizing matrix transpose in CUDA," 2009.
[17] J. McHugh, "Testing intrusion detection systems: a critique of the 1998 and 1999 DARPA intrusion detection system evaluations as performed by Lincoln Laboratory," ACM Trans. Inf. Syst. Secur., vol. 3, no. 4, pp. 262-294, 2000.