A Hardware-Accelerated System for Real-Time Worm Detection

Systems that secure networks against malicious code will be a part of critical Internet infrastructure in the future. This article presents the design and implementation of a system that automatically detects new worms in real time by monitoring all traffic on a network. The system uses field-programmable gate arrays (FPGAs) to scan packets for patterns of similar content and can automatically detect the outbreak of a new Internet worm. It instantly reports frequently occurring strings in packet payloads as likely signatures of the malicious software (malware).

Bharath Madhusudan John W. Lockwood Washington University in St. Louis


Internet worms work by exploiting vulnerabilities in operating systems and application software that run on end systems. The attacks compromise security and degrade network performance. They cause large economic losses for businesses, in terms of system downtime and lost worker productivity. Moore et al. describe several approaches to defend against worms:1

• Prevention. Worms usually work by exploiting vulnerabilities in software. Prevention involves writing secure code. The language security community has done much work toward achieving this goal, but a solution remains distant.

• Treatment. Programmers analyze the vulnerability that the worm exploits, and they release a patch that fixes it. However, it takes time to analyze and patch software. In addition, many users might never apply the patch. As a result, a significant fraction of machines in the network remain vulnerable.

• Host-based containment. Host-based defense can restrict the number of active connections, which limits the number of worm probes that hit other vulnerable machines on the Internet. A problem with host-based defense is that the compromise of a node makes host-based defense useless for that node.

• Network containment. The advantage of network-based containment is that just a few intrusion detection and prevention systems (IDPSs) can protect an

Published by the IEEE Computer Society

0272-1732/05/$20.00  2005 IEEE

entire corporate or university department network.

An issue with current intrusion detection systems is that they use signatures and thus can detect only known worms, leaving the network vulnerable to unknown worms. Here, we focus on the design and implementation of a containment system that protects the network against unknown worms. Design goals of the system include the following:

• Low reaction time. The system should detect anomalies quickly and begin blocking malicious data flows to limit the damage caused by the worm.

• High throughput. The system must keep up with today's high-speed links, such as gigabit Ethernet and OC-48, monitoring all traffic in real time. Software-based systems can perform only a limited number of operations within the time period of a packet transmission, which necessitates the use of hardware.

• Low cost. The system should fit within the limits afforded by today's FPGAs and ASICs.

• Low false-positive rate. The system must generate a minimum number of false positives. Otherwise, administrators will ignore its warnings, and it might block legitimate traffic.

• Robustness to countermeasures. The system should be able to detect polymorphic worms that work by inserting no-ops and random code in the worm payload.

We designed our system to work in tandem with an IDPS such as the one presented in Lockwood et al.2 Detecting a worm signature quickly allows the IDPS to be programmed with the new signature promptly, limiting further damage.

Figure 1 shows a typical configuration for our IDPS. All traffic between subnets passes through a router. Standard Ethernet switches with VLAN support can concentrate traffic from multiple subnets into a link that passes through the signature detection device to the router port. By placing the signature detection device between the router and the VLAN concentrator, the system scans all traffic between subnets for content.

Figure 1. Typical system configuration. (The signature detection device sits on the link between the router and a VLAN concentrator that aggregates traffic from several subnets of hosts.)

Related work

Staniford, Paxson, and Weaver showed why worms are a major security threat.3 They modeled a worm's spread with the susceptibles/infectives (SI) model. They also described techniques that worm authors could use to improve their code's effectiveness. For example, many worms today spread by random probing; the authors describe hit-list scanning, wherein the worm author compiles a list of vulnerable machines before launching the attack.

Moore et al. examine the requirements of a system such as the one presented in this article.1 The authors simulate the spread of a worm and examine the relative effectiveness of two approaches in containing it: address blacklisting and content scanning and filtering (our approach). Their results demonstrate content filtering to be superior to address blacklisting and highlight the need for a system capable of real-time worm detection.

The EarlyBird system outlines the design of a system that detects unknown worms in real time.4 The system uses three criteria to flag a worm:

• Identifying large flows of content. Because worms consist of malicious code, frequently repeated content on the network can be a useful warning of worm activity.

JANUARY–FEBRUARY 2005

61

HOT INTERCONNECTS 12

Estan and Varghese outline methods to identify large flows.5 In this case, they take a hash of packet content in combination with the destination port as the flow identifier, because different worms target different ports. Even if worm authors insert no-ops and random data at the beginning and end of their code, computing multiple hash values over each packet can still detect the attack.

• Counting the number of unique sources and destinations. Estan, Varghese, and Fisk outline mechanisms to estimate the number of sources and destinations on a link.6 They count the sources and destinations corresponding to the signatures that the previous mechanism identified. This method relies on the notion that a drastic increase in the number of sources and destinations is more characteristic of a worm than of a flash crowd (in which multiple clients connect to a single server).

• Determining port scans. This technique relies on counting the number of connection attempts to unused portions of the Internet Protocol address space. Certain portions of the IP address space are known to remain unused. Since many worms use random probing, activity on those addresses might be a warning sign.

Figure 2. Functional-block diagram of the system. (Streaming data passes through a character filter and hash unit into an on-chip count vector; a time-average unit, SRAM analyzer, and alert generator report frequently occurring strings to a software IDS.)


Kumar et al. use a data structure similar to ours to infer flow size distribution.7 Their method involves calculating the number of partitions of different counter values, a computationally intensive problem. The fact that the distribution of flow sizes is Zipfian (a small number of flows consume most of the bandwidth) simplifies the partitioning problem. We do not believe the same assumption holds for keyword distributions in random traffic; given this premise, their methods are computationally infeasible for our problem.

System overview

This system draws from two ideas presented in EarlyBird:

• The worm detection system looks for frequently occurring content.

• It is robust to techniques employed by authors of polymorphic worms.

Our work's contribution lies in providing a detailed design for a system that we have implemented in hardware to run at high speed. Figure 2 shows a functional-block diagram of the system. We explain the functions of its blocks in the following sections.

Hash function

To detect commonly occurring content, we calculate a k-bit hash over a 10-byte (80-bit) window of streaming data. To compute the hash, we generate a set of k × 80 random binary values at system configuration. The system computes each bit of the hash as the XOR over a randomly chosen subset of the 80-bit input string. Randomizing the hash function prevents adversaries from determining a pattern of bytes that would cause excessive hash collisions.

Hash computation

In our system, we compute a hash over every 10-byte string of the payload data. For every byte that enters the system, a new hash is computed over the last 10 bytes in the stream. As with EarlyBird,4 the multiple hash computations over each payload thwart simple polymorphic measures.
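The randomized XOR hash over a sliding 10-byte window can be modeled as a short Python sketch. The hash width K and the random seed are illustrative assumptions, not values from the article; each hash bit is the parity of a randomly chosen subset of the 80 input bits, matching the description above.

```python
import random

W = 10   # window size in bytes (80 bits)
K = 13   # hash width in bits (illustrative; the article's k is configurable)

# At configuration time, draw K random 80-bit masks: mask[j] selects which
# of the 80 input bits are XORed together to form hash bit j.
rng = random.Random(42)   # fixed seed only so this sketch is reproducible
masks = [rng.getrandbits(8 * W) for _ in range(K)]

def window_hash(window: bytes) -> int:
    """Hash one 10-byte window: bit j of the result is the XOR (parity)
    of the input bits selected by masks[j]."""
    assert len(window) == W
    x = int.from_bytes(window, "big")
    h = 0
    for j, m in enumerate(masks):
        parity = bin(x & m).count("1") & 1
        h |= parity << j
    return h

def stream_hashes(payload: bytes):
    """For every byte entering the system, hash the last 10 bytes."""
    for i in range(W, len(payload) + 1):
        yield window_hash(payload[i - W:i])
```

In hardware, all K parity trees evaluate in parallel each clock cycle; this loop is only a behavioral model.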


Count vector Our system uses the hash value to index into a vector of counters. When a signature hashes to a counter, the counter increments by one. When a counter reaches a predetermined threshold, it generates an alert and resets its value to zero. At periodic intervals (henceforth called measurement intervals), the system decrements the counts in each of the counters by the average number of arrivals attributed to normal traffic. To implement the circuit on a Xilinx FPGA, we implement the count vector by configuring dual-ported, on-chip Block RAMs as an array of memory locations. Each of the memories can afford one read operation and one write operation every clock cycle. This allows us to implement a three-stage pipeline to read, increment, and write memory every clock cycle. The signature changes every clock cycle, and the system counts every occurrence of every signature.
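The read-increment-write behavior of the count vector can be sketched as follows. This is a behavioral model of the pipeline, not the hardware itself; THRESHOLD and NUM_COUNTERS use the hardware values given later in the article (850 and 8,192).

```python
THRESHOLD = 850       # alert threshold (hardware value from the article)
NUM_COUNTERS = 8192   # total counters across Block RAMs

counters = [0] * NUM_COUNTERS
alerts = []           # candidate signatures, to be checked by the SRAM analyzer

def observe(h: int, window: bytes) -> None:
    """One pipeline pass: read the counter indexed by the hash, increment it,
    and write it back; on reaching the threshold, emit an alert and reset."""
    idx = h % NUM_COUNTERS
    counters[idx] += 1
    if counters[idx] >= THRESHOLD:
        alerts.append(window)   # suspicious string; verified downstream
        counters[idx] = 0
```

In the FPGA, the read, increment, and write-back stages overlap across consecutive clock cycles, so one signature is counted every cycle.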

Character filter

A character filter allows the exclusion of selected characters from the hash computation. Since worms typically consist of binary data, a worm detection system can ignore some ASCII characters that are highly unlikely to be part of binary data. These characters include nulls, line breaks, and other white space in text streams. Text documents, for example, contain a significant amount of white space and nulls for padding. Another reason to avoid these characters is that strings of nulls or white space do not characterize a good signature for identifying a worm; it is better to use strings that would not appear in documents. We must clarify that our method is a heuristic that has proven effective at avoiding bad signatures. Better approaches might involve (but are not limited to) identifying and ignoring text in e-mail messages, preprocessing entire strings, or stream editing to search for regular expressions and replace them with strings.8
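A minimal sketch of the character filter follows. The exact excluded set is an assumption for illustration; the article names nulls, line breaks, and white space (and the prototype section later lists nulls, new lines, and tabs).

```python
# Assumed exclusion set: NUL, tab, LF, CR, and space -- characters unlikely
# in binary worm payloads and poor signature material (padding, white space).
EXCLUDED = {0x00, 0x09, 0x0A, 0x0D, 0x20}

def char_filter(payload: bytes) -> bytes:
    """Drop excluded characters so they never enter the hash window."""
    return bytes(b for b in payload if b not in EXCLUDED)
```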


Figure 3. Counter values at the end of a measurement interval.

SRAM analyzer

An SRAM analyzer stores suspicious signatures and estimates how often a certain signature has occurred. The SRAM analyzer's purpose is to reduce the number of alerts sent. When an on-chip counter crosses the threshold, the offending string is hashed to a table in off-chip SRAM. The next time a string hashes to the same location in SRAM, we compare the two strings. If they are the same, the system generates an alert. If they are different, we overwrite the SRAM location with the new string; in this case, the counter overflow likely occurred because several semi-frequently occurring strings hashed to the same value. Since semi-frequently occurring strings are not of interest, the SRAM analyzer prevents the system from incurring the overhead of generating an alert packet.
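The verify-or-overwrite behavior of the SRAM analyzer can be sketched in a few lines. The table size is an illustrative assumption; an alert fires only when the same string reaches the same slot twice in a row.

```python
SRAM_SLOTS = 1 << 16          # assumed table size, not from the article
sram = [None] * SRAM_SLOTS    # one stored string per slot

def sram_check(h: int, string: bytes) -> bool:
    """Return True (confirmed: send an alert) only when the same string
    arrives at the same slot twice in a row; otherwise store it and wait."""
    slot = h % SRAM_SLOTS
    if sram[slot] == string:
        return True            # same frequent string seen again -> alert
    sram[slot] = string        # likely colliding semi-frequent strings
    return False
```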

Alert generator

On receiving confirmation from the SRAM analyzer that a signature is indeed frequently occurring, the system sends a User Datagram Protocol (UDP) control packet to an external PC listening on a well-known port. The packet contains the offending signature, that is, the string of bytes over which the hash was computed. The PC then flags the most frequently occurring strings as suspicious.

Periodic subtraction of time averages

There are two approaches to handling the end of measurement intervals. One approach resets the counters periodically.5 After a fixed window of bytes passes through the link, all of the counters are reset to zero. However, this approach has a shortcoming, as Figure 3 illustrates: if the value of a counter corresponding to a malicious signature (labeled M in the figure) is just below the


Figure 4. Minimizing contention by segmenting count vectors into smaller memories.

threshold near the end of the measurement interval, then resetting this counter will result in the signature going undetected.

We use an alternative approach: rather than reset the counters, we periodically subtract an average value from all of them. We compute the average value as the expected number of bytes that would hash to each bucket in one interval. Our simulations show that periodic subtraction, rather than counter reset, significantly increases the number of signatures found.
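The end-of-interval update can be sketched as follows, assuming signatures hash uniformly across the counters (so the expected per-bucket arrival count is simply bytes processed divided by the number of counters). The interval and counter values come from the article's hardware configuration.

```python
INTERVAL_BYTES = 2_500_000   # measurement interval (2.5 Mbytes)
NUM_COUNTERS = 8192

def end_of_interval(counters, bytes_processed=INTERVAL_BYTES):
    """Subtract the expected number of normal-traffic arrivals per bucket,
    instead of resetting, so a counter just below threshold keeps most
    of its accumulated count across the interval boundary."""
    avg = bytes_processed // NUM_COUNTERS
    return [max(c - avg, 0) for c in counters]
```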

Design considerations

So far, we have covered the major aspects of the system's design. In this section, we detail how we achieve high performance and how our design covers certain edge cases.

Maintaining throughput

Figure 2 showed the system processing one byte at a time. To achieve higher throughput, our system can process multiple strings in each clock cycle. By augmenting the system with multiple parallel count vectors, we can hash multiple windows of bytes in parallel. To obtain an accurate count of how often a signature has occurred, we must sum the corresponding counters across all count vectors in every clock cycle. To allow multiple memory operations in parallel, we segment the count vector into multiple banks (Block RAMs). The higher-order bits of the hash value are used by the system


to determine which Block RAM to access. The lower-order bits determine which counter to increment within a given Block RAM. Figure 4 shows the modified system. More than one string could hash to the same Block RAM (M1 through Mn in the figure); we call this situation a bank collision, which priority encoders (PE1 through PEn in the figure) resolve. This means that the system might fail to count between one and three of the strings in a clock cycle. The probability of collision c is

c = 1 − ∏_{i = N−B+1}^{N−1} (i / N)    (1)

where N is the number of Block RAMs and B is the number of bytes coming in every clock cycle. We examine the impact of bank collisions on the operation of the system later.
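Equation 1 can be checked directly: the product is the probability that all B strings in one cycle land in distinct banks, and c is its complement. With the article's later example of 64 banks and B = 4, this yields approximately 0.09.

```python
def collision_probability(N: int, B: int) -> float:
    """Probability that at least two of the B strings arriving in one
    clock cycle hash to the same one of N memory banks (Equation 1)."""
    p_distinct = 1.0
    for i in range(N - B + 1, N):
        p_distinct *= i / N    # i-th string avoids the banks already taken
    return 1.0 - p_distinct
```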

Benign strings

Certain strings, such as the first several bytes of an HTTP request, are commonly occurring but are not themselves a worm or virus. The system administrator can mark such strings as benign. There are several ways to deal with benign strings. One method, as stated earlier, is to preprocess strings with the character filter. For a limited number of common strings, it is possible to program the hardware not to increment certain hash buckets, thus avoiding the sending of alerts. But as the number of benign strings approaches the number of counters, this method reduces the system's effectiveness because it devotes fewer counters to detecting signatures. For a much larger number of less commonly occurring strings, it is possible to suppress false positives in software; because such strings occur rarely, letting the control host handle them adds little to the control traffic sent to software.

Threshold and measurement interval Within a measurement interval, the number of signatures observed approximately equals the number of characters processed. It is less only because bank collisions cause the system to skip a small fraction of the characters. Determining the threshold for generating an alert given the length of the measurement

interval reduces to a well-studied problem in hashing. Namely, if you hash m elements into a table with b buckets, the probability that the number of elements hashing to the same bucket exceeds i is bounded by9

b × (em / ib)^i    (2)

In this expression, i is the threshold. In our case, m signatures hash into b counters. Hence, given a length of measurement interval, we can vary the threshold to make the upper bound on the probability of a counter exceeding the threshold acceptably small. This in turn reduces the number of unnecessary SRAM accesses. The idea is that because incoming signatures hash randomly to the counters, anomalous signatures are likely to cause counters to exceed threshold for appropriately large thresholds. In our hardware implementation, we take the measurement interval m to be 2.5 Mbytes, the number of counters b to be 8,192, and threshold i to be 850. In this case, the bound on the probability of counter overflow for random traffic is 1.02 × 10−9.
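Expression 2 is easy to evaluate numerically. Using the software-simulation parameters given later in the article (m = 2.5 Mbytes, b = 64 × 512 counters, i = 230), the bound comes out to about 1.5 × 10⁻⁶, matching the value quoted there.

```python
import math

def overflow_bound(m: int, b: int, i: int) -> float:
    """Upper bound b * (e*m / (i*b))**i on the probability that some
    counter exceeds threshold i when m signatures hash uniformly into
    b counters (Expression 2)."""
    return b * (math.e * m / (i * b)) ** i
```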

Performance evaluation We used simulation to study the effect of memory bank conflicts on the system. We also checked how well the probability bound predicts detection of commonly occurring content. Finally, we demonstrated the performance of our hardware implementation by injecting worm signatures into trace data.

Trace data

We used real traces to provide realistic background traffic for simulations (www-nrg.ee.lbl.gov/anonymized-traces.html). They contain anonymized FTP connections to public FTP servers at the Lawrence Berkeley National Laboratory during a 10-day interval. Each trace spans a day. We concatenated two of the traces and made the following changes to them:

• We stripped header information from the packets, preserving only the packet payloads, since only they are of interest. We also preserved packet boundaries.

• We randomly inserted signatures derived from the payload of the SoBigF worm as it appeared in a MIME 64 encoding. We

Figure 5. Memory collisions decrease with the use of more dual-ported memories. (Empirical versus theoretical probability of collision as a function of the number of memory banks.)

did so at packet boundaries in the trace data so that the insertions constituted a fixed percentage of the overall traffic.

Software simulation

We wrote a software application to process the traces for the simulation. The application emulates the system described earlier, with a configuration consisting of 64 memory banks, each with 512 eight-bit counters. The threshold was 230, and the measurement interval was 2.5 Mbytes. By Expression 2, the probability of a counter crossing the threshold is 1.5 × 10⁻⁶. The system resolved collisions between banks as described earlier. We assumed that four bytes entered the system every clock cycle. The hash functions are the same as those used in the hardware implementation. The simulation also used random traffic, in which counters were incremented at random; for this traffic, C's rand() function took the place of the hash.

Impact of bank collisions Results from this simulation answered the question of how much on-chip memory the system needs to keep the number of collisions at an acceptable level. With the value of B = 4 in Equation 1, Figure 5 shows the theoretical


Figure 6. Probability of counter overflow. (Random traffic versus FTP trace data, as a function of threshold.)

and empirical probability of collisions for 4, 8, 16, 32, 64, and 128 Block RAMs. The probability of collision seen with trace data agrees with the values predicted by theory. As Figure 5 shows, a sufficiently large amount of Block RAM keeps the collision rate low. For instance, the probability of collision with 64 Block RAMs (256 Kbits) is 0.09. In this case, the system skips the counting of only 9 percent of input strings.

Relationship between threshold and measurement interval

We used the traces without signatures to simulate the system and check how well Expression 2 predicts the rate of counter overflow. The theoretical bound on the probability of counter overflow decreases exponentially over a short span of threshold values. Figure 6 shows the empirical probability of counter overflow (calculated using FTP trace data) as well as the probability of overflow with random data. We derive these values by dividing the number of overflows by the total number of bytes processed. The probabilities with random and trace data track each other closely up to a threshold of about 100. After this point, the random data's overflow probability quickly drops to zero; the trace data's overflow probability approaches zero but does not reach it, even for very high thresholds. Figure 7 provides some insight as to why


this might be happening: Figure 7 plots the fraction of counters that overflow, for increasing thresholds, using both random and FTP trace data. Although this fraction quickly decreases to zero for random data, the FTP data results in a significant fraction of the counters (approximately 3 percent) overflowing even for larger thresholds. This is because of the presence of benign strings in the FTP trace data. Benign strings are also why the probability of overflow for FTP trace data approaches zero slowly. Figure 6 likewise shows that the random data's overflow probability follows the sharp falloff predicted by Expression 2 more closely than the FTP trace data does.

Effectiveness of hardware implementation

Worms vary widely in size, from a few hundred bytes (SQL Slammer) to thousands of bytes (Sasser). Hence, we considered the robustness of the system to various worm sizes. We took the minimum concentration of worm traffic required for detection as a metric for system effectiveness. This is a good metric because worms normally spread by probing other vulnerable hosts on the Internet. Because the system relies on a counter exceeding a threshold before it issues an alert, the more occurrences of the signature, the better the chances that the system will detect it.

Figure 7. Fraction of overflowed counters. (Random traffic versus FTP trace data, as a function of threshold.)


This implies an inverse relationship between the size of the worm and the sensitivity of the system to stealth. Consider two worms of sizes 500 and 20,000 bytes. In both cases, the worms would have to pass through the system a fixed number of times before it detects them. But given the difference in size between the two, the concentration (defined as a percentage of total payload bytes processed) of the smaller worm in the Internet traffic will be far less than that of the larger worm. Figure 8 shows the number of alerts generated for increasing numbers of a 500-byte signature of the SoBigF worm in trace data. Trace data was passed through the actual hardware system to obtain these results. The number of alerts increases monotonically with increasing percentages of the worm in the trace data. The first point in the graph represents the trace without any worm signature. The high number of alerts is indicative of the fact that the system found the same set of strings over and over again. The number of unique strings found was far less for each point in the graph. For longer signatures, the trend in number of alerts was similar. However, the number of worm signatures required to produce a change in the number of alerts differed in each case. For example, in the case of a 20,000-byte signature, the worm signature had to constitute 5 percent of the trace data before the number of alerts changed. In other words, a trace with 4 percent of the traffic containing a worm of size 20,000 bytes was indistinguishable from a trace containing no worm traffic.


Functional prototype We targeted the circuit discussed here to run on the Field-Programmable Port Extender (FPX) platform.10 The FPX enables the processing of high-speed network flows using large FPGAs. Users can download application circuits to the Xilinx XCV2000E FPGA on the FPX card, over a network. Network traffic is clocked into this device using a 32-bit-wide data word. We designed our circuit to fit into the layered protocol wrappers developed for FPX.11 The wrappers allow the processing of high-level packets in reconfigurable logic and allow a client application to send and receive packets with FPGA hardware. We have modularized the system’s functionality as follows:


Figure 8. Number of alerts generated for a 500-byte signature.

• Character filter. The character filter receives 4 bytes per clock cycle from the wrappers and determines which of the 4 bytes are valid (it considers nulls, new lines, and tabs as invalid). • Byte shifter. This module shifts the


Table 1. XCV2000E FPGA device utilization.

Resources      Device usage            Utilization (percentage)
Logic slices   10,067 out of 19,200    52
Block RAMs     59 out of 160           36
Flip-flops     12,164 out of 38,400    31

Table 2. Hardware-software performance comparison.

Platform                            Throughput (Mbps)
AMD Athlon 2400+                    63
FPGA circuit on the FPX platform    1,860

signature by 0 to 4 bytes, depending on the character filter's output. For instance, if the character filter omits one character, the byte shifter shifts the signature by 3 bytes.

• Hash. The hash module calculates four hash values per clock cycle.

• Priority encoder. Based on the high-order bits of the hash values, a priority encoder resolves collisions between Block RAMs. The number of valid signatures per clock cycle varies from 0 to 4 because of the character filter.

• Read, increment, and write back. Each of these modules operates in one clock cycle. They access the address and data buses of the Block RAMs based on the priority encoder's output.

We implemented a functional prototype of the system on the FPX. The circuit easily fits within the Xilinx Virtex 2000E FPGA on that platform, as Table 1 shows. The circuit has a post-place-and-route frequency of 63.4 MHz and a pipeline delay of 7 clock cycles, which introduces a delay of only 70 ns in the data path. Each of the outlined modules has only a one-clock-cycle delay. Thus, the system can detect and respond to worms in real time, adding a latency equivalent to a short piece of optical fiber.

Table 2 compares hardware and software performance: our hardware prototype sustained 1,860 Mbps versus 63 Mbps for an equivalent software implementation, a speedup of roughly 30×.


The design of a system for detecting frequently occurring signatures in network traffic has been presented. We have shown how the system scales to process gigabits per second of data using FPGA hardware on the FPX network platform. By exploiting the parallelism afforded by hardware, the system can scan far more network traffic content than software-based approaches. Throughput was improved by computing hash functions in hardware and using on-chip memories to implement parallel counters; in software, computing the hash and then updating the counters requires a large number of sequentially executed instructions.

Prior to this work, network-monitoring tools required that system administrators of high-speed networks use their intuition to detect anomalies in network traffic. This system automates much of that process by detecting network traffic that contains frequently occurring content.

Acknowledgments

We thank the reviewers for their comments. Bharath Madhusudan thanks Sarang Dharmapurikar for useful discussions. This work was supported by Global Velocity, and we have received equity from the license of technology to that company. John Lockwood is cofounder of Global Velocity and has served it as a consultant.

References

1. D. Moore et al., "Internet Quarantine: Requirements for Containing Self-Propagating Code," Proc. IEEE Infocom 2003, IEEE Press, 2003, pp. 1901-1910.

2. J.W. Lockwood et al., "Application of Hardware Accelerated Extensible Network Nodes for Internet Worm and Virus Protection," Active Networks: IFIP TC6 5th Int'l Workshop (IWAN 2003), LNCS 2982, Springer-Verlag, 2003, pp. 44-57.

3. S. Staniford, V. Paxson, and N. Weaver, "How to Own the Internet in Your Spare Time," Proc. Usenix Security Symp., Usenix Assoc., 2002.

4. S. Singh et al., The EarlyBird System for the Real-Time Detection of Unknown Worms, tech. report CS2003-0761, Dept. of Computer Science, Univ. of Calif., San Diego, Aug. 2003.

5. C. Estan and G. Varghese, "New Directions in Traffic Measurement and Accounting," Proc. 2002 Conf. Applications, Technologies, Architectures, and Protocols for Computer Comm. (SIGCOMM), ACM Press, 2002, pp. 323-336.

6. C. Estan, G. Varghese, and M. Fisk, "Bitmap Algorithms for Counting Active Flows on High-Speed Links," Proc. 3rd ACM SIGCOMM Conf. Internet Measurement, ACM Press, 2003, pp. 153-166.

7. A. Kumar et al., "Data Streaming Algorithms for Efficient and Accurate Estimation of Flow Size Distribution," Proc. Joint Int'l Conf. Measurement and Modeling of Computer Systems (SIGMETRICS), ACM Press, 2004, pp. 177-188.

8. J. Moscola et al., "Implementation of a Streaming Content Search-and-Replace Module for an Internet Firewall," Proc. 11th Symp. High-Performance Interconnects, IEEE CS Press, 2003, pp. 122-129.

9. G. Gonnet and R. Baeza-Yates, Handbook of Algorithms and Data Structures, Addison-Wesley, 1991.

10. J.W. Lockwood, "An Open Platform for Development of Network Processing Modules in Reprogrammable Hardware," Proc. IEC DesignCon '01, Int'l Eng. Consortium, 2001, pp. WB-19.

11. F. Braun, J. Lockwood, and M. Waldvogel, "Protocol Wrappers for Layered Network Packet Processing in Reconfigurable Hardware," IEEE Micro, vol. 22, no. 1, Jan. 2002, pp. 66-74.

Bharath Madhusudan was a student at Washington University in St. Louis. His research interests include the design and implementation of high-speed networking systems. Madhusudan has an MS in computer science from Washington University in St. Louis and a BS in computer science from the University of Maryland at College Park.

John W. Lockwood is an assistant professor in the Department of Computer Science and Engineering at Washington University in St. Louis. His research interests include the design and implementation of networking systems in reconfigurable hardware. Lockwood and his research group developed the Field-Programmable Port Extender (FPX) to enable rapid prototyping of extensible network modules. He has a BS, an MS, and a PhD in electrical engineering from the University of Illinois, Urbana-Champaign. He is a member of the IEEE, the ACM, Tau Beta Pi, and Eta Kappa Nu.

Direct questions and comments about this article to John Lockwood, Dept. of Computer Science and Engineering, Washington University, 1 Brookings Dr., Campus Box 1045, St. Louis, MO 63130; [email protected].
