Transparent Communication Management in Wireless Networks by

David Angus Kidston

A thesis presented to the University of Waterloo in fulfilment of the thesis requirement for the degree of Master of Mathematics in Computer Science

Waterloo, Ontario, Canada, 1998

© David Angus Kidston 1998

I hereby declare that I am the sole author of this thesis. I authorize the University of Waterloo to lend this thesis to other institutions or individuals for the purpose of scholarly research.

I further authorize the University of Waterloo to reproduce this thesis by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research.


The University of Waterloo requires the signatures of all persons using or photocopying this thesis. Please sign below, and give address and date.


Abstract

Wireless networks are characterized by the generally low quality of service (QoS) that they provide. In the face of user mobility between heterogeneous networks, it is understandable that distributed applications designed for the higher and constant QoS of wired networks have difficulty operating in such complex environments. Proxy systems provide one solution to this problem. By placing an intermediary on the communication path between wired and wireless hosts, the communication streams passing between the elements of the distributed application can be filtered. This processing can ameliorate wireless heterogeneity by converting the wireless side of the stream to a more appropriate communication protocol, or can reduce bandwidth usage through data filtering. It is up to the application to request and control services at the proxy.

This model of control is not always appropriate. Many legacy applications designed for the wired environment cannot be modified for use with a proxy. Similarly, though proxies can convert from one communication protocol to another at the interception point, this conversion can break the end-to-end semantics of the original communication stream.

This thesis explores an alternate proxy-control method, where control of filter services can originate outside the application. This model relies on knowledge of application data and communication protocols to support filters which can make packet-level modifications that do not compromise the operation of either protocol or application. These new transparent services are controlled externally through a user interface designed for third-party service control. A method for transparent stream control is presented, and a sample implementation for supporting the transparent modification of TCP streams is explained. The proxy architecture that was used and partially developed for this thesis is described, examples of the associated filters are given, and the external user-interface system is presented.


Acknowledgements

This thesis is the product of input from a wide variety of sources, and I would like to take the opportunity to thank as many of them as I can remember. First off, I would like to thank all the members of the Shoshin research group at the University of Waterloo. They provided the sense of community and angst necessary to motivate me into fashioning and finally finishing this thesis.

I would also like to thank several individuals who gave direction to this thesis. My advisor, Jay Black, provided an environment in which I could explore many areas of interest to me, but also kept me grounded and focused with good advice. I profited greatly from discussions with Michael Nidd, former Shoshin lab guru, and Marcello Lioy, former fellow Masters student. I would also like to thank Tara Whalen for taking the edge off of Masters work (and life in general) and our two co-op students, Brent Elphick and Michal Ostrowski, who showed me that program implementation can be almost as fun as the design. Thanks for making the lab a welcoming place guys!

Finally, I would like to thank all my family and friends who stuck with me through this entire process. By giving me your support, helpful nudges and implied threats you made the time not just rewarding, but incredibly enjoyable. Cheers!


Contents

1 Introduction

2 Background
   2.1 Mobile IP
   2.2 The Transmission Control Protocol
   2.3 The Problem: Wireless Variability

3 Related Work
   3.1 Application-Level Solutions
   3.2 Protocol-Level Solutions
   3.3 Proxied Solutions
   3.4 Summary

4 Architecture
   4.1 Architecture Overview
   4.2 Thesis Organization

5 Service Proxy
   5.1 Issues
       5.1.1 Proxy Mobility
       5.1.2 The End-to-End Semantics Problem
       5.1.3 Run-Time Environment
   5.2 Service-Proxy Design
   5.3 Service-Proxy Interface
       5.3.1 Command Summary
       5.3.2 Interface Example

6 Network Monitor
   6.1 Issues
       6.1.1 Data Sources
       6.1.2 Generated Traffic
       6.1.3 Notification Method
   6.2 Monitor Design
   6.3 EEM Interface
       6.3.1 EEM Variables
       6.3.2 EEM-Interface Functions
       6.3.3 Interface Example

7 Transparent Service Control
   7.1 Control Methods
   7.2 Kati Overview
   7.3 Kati Design
   7.4 Example

8 Stream Services
   8.1 Transparency-Support Filters
       8.1.1 Issues
       8.1.2 The TCP-Transparency-Support Filter (TTSF)
       8.1.3 TTSF Design
       8.1.4 TCP-Specific Issues
       8.1.5 Packet-Dropping Example
       8.1.6 Packet-Compression Example
   8.2 Protocol Tuning
       8.2.1 Snoop
       8.2.2 TCP Window-Size Modification
       8.2.3 The End-to-End Problem Revisited
   8.3 Data Manipulation
       8.3.1 Data Removal
       8.3.2 Hierarchical Discard
       8.3.3 Data-Type Translation

9 Security Concerns

10 Summary and Future Work
   10.1 Summary
   10.2 Future Work
       10.2.1 Layered Service Abstraction
       10.2.2 Operating-System Integration
       10.2.3 Mobility
       10.2.4 Double-Proxy Systems

Bibliography

List of Tables

3.1 A Comparison of the Work Reviewed
6.1 SNMP Variables Supported by the EEM
6.2 Additional EEM Variables
6.3 EEM Initialization and Termination Functions
6.4 EEM ID Functions
6.5 EEM Attribute Functions
6.6 EEM Register Functions
6.7 EEM Query Functions
8.1 Several Data Classes and Methods for Reducing/Compressing Each

List of Figures

1.1 Proxy Architecture
2.1 Triangular Routing
4.1 Enhanced-Proxy Architecture
5.1 The Service-Proxy (SP) Architecture
5.2 Detail of the SP Filtering Mechanism
5.3 SP Interface Example
6.1 The Execution Environment Monitor (EEM) Architecture
6.2 Sample Code
7.1 Main Kati Window
7.2 Xnetload Window
7.3 Adding a Service from Kati
7.4 New Service Appears
8.1 TCP Header
8.2 Transparent TCP-Filter Algorithm
8.3 Packet Dropping Example
8.4 Packet Compression Example

Chapter 1

Introduction

Mobility in computing has shifted from a practical impossibility to a priority. The increasing demand for information access anytime, anywhere has provided an impetus for new investigations into wireless networks. Unfortunately, mobility comes with a corresponding increase in complexity. As a mobile computer moves from location to location, available bandwidth, error rates, and other quality-of-service (QoS) characteristics can change drastically.

This kind of variability is virtually unknown in the more common wired networks, where a constant high throughput and low error rate are the norm. This relative stability has been used to great advantage in the creation of tuned networking protocols which use predictive algorithms in their operation. For instance, TCP [23, 26] uses estimations of round-trip time to derive appropriate retransmission timeouts. It can then use this measure to maximize throughput and adapt to variability in the network by sending increasing amounts of data until packets are lost. In a wired network, such losses are most likely caused by congestion resulting from overuse of some portion of the intervening network, and TCP will lower its transmission rate to avoid exacerbating the problem. However, when placed in a wireless environment, TCP will encounter more packet losses from transmission failures and delays associated with mobility than from congestion [4]. By lowering the packet-transmission rate to avoid overloading intermediate nodes, TCP is reacting in the exact opposite of the desired manner. In a wireless medium, lost packets should be

Figure 1.1: Proxy Architecture

retransmitted as soon as possible to allow the transmission window to slide forward.

Problems caused by the variability and generally lower QoS provided by the wireless medium are by no means confined to the network and transport levels. Distributed applications rely on the speed and dependability of wired networks. Design decisions are made assuming certain bandwidth and delay characteristics. For example, applications with strict data-delivery timing, such as real-time audio or video, rely on constant and high QoS from the underlying network.

Proxy architectures provide a solution to both protocol- and application-level problems (see Figure 1.1). These architectures assume a network model where one side of the communication is a wired host and the other wireless. The wired host is stationary and has a fast and stable connection to the intervening network. The wireless host is mobile and has a connection quality that is generally lower than that of the fixed host and that can also change over time. The communication stream between the two endpoints is split by a proxy whose purpose is to manage the communication stream in both directions. It does this by filtering the data stream so that the slow link is not overloaded. The proxy might supply services such as the following.

• Protocol Conversion. By converting to protocols tuned for the wireless medium on that side of the proxy, TCP-style misinterpretations can be avoided.

• Data Reduction. Applications such as real-time audio and video send time-sensitive data which may be out of date by the time they reach the proxy. If applications can handle missing data, the reduction in wireless-bandwidth usage may improve the timing characteristics of the data arriving at the real-time application on the mobile host.

• Data Compression. With knowledge of current network conditions, the application can request that the proxy vary its level of compression to match the available wireless resources.

• Data Translation. In some cases, converting to a more compact data format can greatly reduce the required bandwidth of a stream. For instance, images can be converted from colour to monochrome, or text from PostScript to ASCII.

• Support for Partitioned Applications. In some cases, an application may wish to place some of its decision-making and information-gathering capabilities on the proxy. This can allow processing to continue if the mobile becomes disconnected. The software running on the proxy can also be used as an agent, collecting and pre-formatting data before forwarding the summary to the mobile part of the application.
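To make the filtering idea concrete, the sketch below shows a minimal filter chain with one data-reduction filter that drops stale real-time frames before they cross the wireless link. None of these names (`Filter`, `make_staleness_filter`, `run_chain`, the `timestamp` field) comes from the thesis or from Comma; this is a hypothetical illustration of the general pattern.

```python
import time
from typing import Callable, Optional

# A packet filter takes a packet and returns either a (possibly
# modified) packet to forward, or None to drop it entirely.
Filter = Callable[[dict], Optional[dict]]

def make_staleness_filter(max_age_s: float) -> Filter:
    """Data-reduction filter: drop time-sensitive packets (e.g. audio or
    video frames) that are already older than max_age_s by the time they
    reach the proxy, so they do not waste wireless bandwidth."""
    def staleness_filter(packet: dict) -> Optional[dict]:
        if time.time() - packet["timestamp"] > max_age_s:
            return None          # stale frame: not worth forwarding
        return packet
    return staleness_filter

def run_chain(packet: dict, filters: list[Filter]) -> Optional[dict]:
    """Apply each filter in turn; any filter in the chain may drop the packet."""
    current: Optional[dict] = packet
    for f in filters:
        if current is None:
            break
        current = f(current)
    return current
```

A real proxy would of course operate on raw packets rather than dictionaries, but the drop-or-forward contract is the essential point.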

In contemporary proxy systems, the application controls filter operations since it has in-depth knowledge of its own computation and data streams. However, this model is not appropriate for all cases. Many legacy applications cannot be instrumented for use with a proxy system, because of either a lack of resources or source code. Similarly, there are some applications that are sensitive to end-to-end semantics and cannot make use of the filtering facility offered by contemporary proxy systems.

I argue that communication streams can be modified for optimized transmission over wireless networks without the collaboration or knowledge of the distributed application. A proxy architecture can be used to apply transparent services to communication streams, while preserving end-to-end semantics. By using knowledge of network- and application-level protocols, the proxy can be used to interpret the semantic content of data streams, and optimize transmissions on the wireless side to make best use of current network conditions without requiring control by the distributed application.

A contemporary, general-purpose proxy system named Comma was extended to give an enhanced architecture. Comma consists of the required service proxy and a network monitor. This proxy architecture was extended with a user interface named Kati. Kati allows users to monitor and control the streams and filters of a proxy, as well as monitor network conditions. This support


architecture makes it possible to control a number of transparency-support filters, which can be used in the creation of transparent data-filtering services.

The remainder of this thesis is structured as follows. In the following chapter, some background on wireless data networks is presented. This includes an overview of both Mobile IP and TCP, as well as a discussion of the problems encountered in a wireless environment. Chapter 3 describes the related research upon which this thesis is based. Special attention is paid to the differences between protocol- and application-level services, and how proxy systems can provide a general integrated solution. Chapters 4 to 7 examine the proxy architecture used to support transparent services. Chapter 4 introduces the architecture, while Chapters 5 and 6 describe the Comma service proxy and network monitor. Chapter 7 explains the Kati shell which was implemented to monitor and control the Comma system. Chapter 8 describes the transparent stream-modification scheme in detail. The base filters which have been implemented are described, as are some sample transparent services. Security concerns are addressed in Chapter 9. Finally, the conclusions and some possible directions for future work are explored in Chapter 10.

Chapter 2

Background

In order to understand the problems related to supporting distributed applications, some background is needed on the nature of mobility and wireless networks. This chapter looks first at Mobile IP, an addressing protocol for hosts with non-static connections (mobiles). This is followed by a brief overview of the Transmission Control Protocol (TCP). Although most of the protocol work described in this thesis is general in its applicability, TCP is a widely used reliable transport protocol and is used in many of the examples and sample applications. Finally, there is a brief discussion of the types of problems faced by distributed applications in a wireless environment.

2.1 Mobile IP

One of the most difficult issues to deal with, even in a static network, is how to identify where to send packets. The Internet Protocol (IP) provides a method for identifying machines on the Internet. IP addresses are specified as a 32-bit integer value which is often broken into four eight-bit numbers for ease of human use. These addresses are used by routers to determine the path on which data packets are to be sent.

In static networks, routing tables are created so that inter-network routers can determine


where packets are to be sent next. This allows packets to be shuttled from one network to another until they finally reach the destination network, and from there, the addressed host.

In mobile networks, this model does not work. Since mobile machines can switch from one network access point to another, static routing would be continuously out of date. Mobiles which happen to be away from their "home" network would receive no traffic at all. Mobile IP [20] was created to deal with this issue. Mobile IP is basically a packet-forwarding protocol that allows mobile hosts to change access points, and yet continue to receive uninterrupted packet streams from anywhere else in the Internet. Mobile IP is made up of three main entities: besides the Mobile Host (mobile), there is a Home Agent (HA) and a Foreign Agent (FA). The architecture is described below.

The mobile is simply a computer whose access point to the wired network may change. Mobiles have a home network from which they base their operation. The home network is chosen at the same time as the permanent address of the mobile to ensure that the required Mobile IP software will be running in this sensitive location. The current location of the mobile is registered with the HA, usually through the mobile's current FA.

The HA is the forwarding host on the mobile's home network. This machine intercepts traffic bound for any mobile that has registered with the HA. Packets are encapsulated using IP tunneling [25], and sent to the currently-registered location of the mobile. Encapsulation takes an IP packet and places it as data inside another IP packet. The process essentially involves placing a new IP header before the original packet. The HA uses the registered care-of address of the mobile's FA as the destination and its own address as the source address.

The FA is the forwarding host at the mobile's current network. The foreign agent registers its address with the HA as the mobile's current care-of address.
In this way, the FA receives the packets forwarded from the HA and bound for the mobile. The FA then decapsulates the forwarded packet and passes it on to the appropriate mobile. It is up to the mobile to register with the local FA when it enters a new network. Mobiles use the Internet Control Message Protocol (ICMP) to discover routers and FAs in their current local network. It is possible for the mobile to be its own FA, but this requires that the mobile be capable of changing addresses to fit with


Figure 2.1: Triangular Routing

any network to which it happens to be connected.

The Internet Control Message Protocol [22] is a generalized method for passing information about network state between hosts. Of most interest to Mobile IP are the Router Discovery messages [6], which are used to determine the addresses of local routers. Internet routing depends on these messages to provide machines on a network with a place to send packets first. The Router Discovery messages of interest to mobiles are the router-solicitation and the router-advertisement messages. Router-solicitation messages are generated by hosts seeking a router, and are sent only if it is determined that the previous router is no longer available. For fixed hosts, the default router is determined from a configuration file on initialization. Router-advertisement messages are generated by routers to respond to router solicitations, and are also generated periodically to inform local machines that the router is still available. These messages are used by mobiles to discover routers and FAs when they have moved their access point to a new network.

As effective as Mobile IP is in handling routing in a dynamic environment, there are two major drawbacks in its approach. The first is the effect known as triangular routing (see Figure 2.1). This arises because all traffic bound for the mobile must be routed through the home agent. Even if the mobile is very close to the host communicating with it, packets are routed through


a possibly very distant HA. On the other hand, traffic from the mobile is sent directly to its recipient. A proposed solution for this problem [21] is to create a binding cache on the recipient's home network, which caches the most recent location of the mobile. Packets can then bypass the HA by being forwarded directly to the FA at the mobile's current location. The problem with this approach is that these binding caches must be placed on all static hosts, as opposed to the current scheme where changes are localized to wireless subnetworks.

The second drawback to the Mobile IP approach comes from the delay in updating the HA after the mobile has moved to a new network. The period and actions required for a mobile to move from one network to another are known as hand-off. There will be a period of time after the hand-off where packets arrive at the old FA and not the new one. Even though the mobile may update the HA right after the hand-off, all packets in transit to the old FA, and those transmitted from the HA before the new registration reaches it, will arrive mistakenly at the old FA. These packets may either be dropped by the FA, relying on higher-level communication protocols to handle the loss, or they can be forwarded to the new FA. Forwarding is not always an appropriate solution, since forwarding from one network to another may incur significant delays, causing packets to be considered lost.
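The encapsulation step described above, placing a new IP header before the original packet with the HA as source and the care-of address as destination, can be sketched as follows. This is an illustrative reconstruction, not code from the thesis; it builds a minimal outer IPv4 header carrying protocol number 4 (IP-in-IP), with identification, fragmentation, and TTL fields filled with placeholder values.

```python
import socket
import struct

def ip_checksum(header: bytes) -> int:
    """Standard 16-bit one's-complement checksum over an IP header."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate(inner_packet: bytes, ha_addr: str, care_of_addr: str) -> bytes:
    """Prepend an outer IP header so the HA can tunnel the original
    packet, unchanged, to the mobile's current care-of address."""
    total_len = 20 + len(inner_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,                    # version 4, header length 5 words
        0,                               # type of service
        total_len,                       # total length of outer packet
        0, 0,                            # identification, flags/fragment offset
        64,                              # TTL (placeholder value)
        4,                               # protocol 4 = IP-in-IP
        0,                               # checksum placeholder
        socket.inet_aton(ha_addr),       # source: home agent
        socket.inet_aton(care_of_addr),  # destination: care-of address
    )
    checksum = ip_checksum(header)
    return header[:10] + struct.pack("!H", checksum) + header[12:] + inner_packet
```

Decapsulation at the FA is the inverse: strip the first 20 bytes and deliver the original packet to the mobile.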

2.2 The Transmission Control Protocol

The Transmission Control Protocol [23, 26], more commonly known as TCP, is the most widely-used reliable transport protocol. In fact, TCP has become a de facto standard for use on the Internet. TCP provides a connection-oriented, end-to-end communication service which guarantees reliable and in-order delivery of data. TCP achieves this, despite its own use of an unreliable datagram service, by use of a sliding-window acknowledgement scheme.

In this acknowledgement scheme, all data sent between peer communicating processes is acknowledged. During connection setup, a transmission-window size is negotiated. This determines the amount of data that can be left unacknowledged on the network. The sender maintains a send-window size which shrinks as it sends more data. The receiver acknowledges receipt of data and


declares the amount of data it is willing to receive in its receive window. The sender will never send more data than that advertised by the receive window.

In order to determine if a data segment has been lost, TCP calculates how long it should take for the acknowledgement of a packet to arrive. If no acknowledgement has arrived in this time plus twice the expected standard deviation, the data segment is considered lost. TCP calculates this timeout value by keeping running averages of the delay between sending a packet and receiving its acknowledgement. This allows TCP to adapt the timeout value to changing network conditions.

TCP assumes that loss of data segments results from congestion in the intervening network. In contemporary wired networks, this is a valid assumption, since a packet is rarely lost except when it is discarded at a node with insufficient memory to buffer it. In order to restabilize the network and avoid congestive collapse [9], TCP initiates congestion-control and -avoidance mechanisms. First, the transmission-window size is reduced, and is only increased subsequently according to a slow-start mechanism. Finally, the retransmit-timeout value is doubled for each subsequent timeout of the same data segment until some threshold is met. This mechanism is known as exponential backoff.

Improvements to this congestion-avoidance algorithm called fast retransmit and fast recovery were later proposed in [10]. In TCP, when a packet arrives at the receiver out of order, an immediate acknowledgement (ACK) is sent to the sender indicating what sequence number is missing. If several of these ACKs arrive at the sender, it is an indication that the packet has been lost, but that congestion is not critical. Under fast retransmit, the missing packet is resent immediately. Fast recovery requires that the send window be shrunk, but slow-start is not performed.
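The adaptive timeout described above can be sketched as a small estimator: keep running averages of the round-trip time and of its deviation, declare a segment lost after the smoothed RTT plus twice the deviation, and double the timeout on each repeated loss of the same segment. The smoothing gains (1/8 and 1/4) and the backoff cap are illustrative choices in the spirit of common TCP implementations, not values taken from the thesis.

```python
class RtoEstimator:
    """Sketch of TCP's adaptive retransmission timeout: running averages
    of round-trip time and deviation, timeout = srtt + 2 * deviation,
    with exponential backoff on repeated timeouts."""

    def __init__(self, initial_rtt: float = 1.0):
        self.srtt = initial_rtt            # smoothed round-trip time (s)
        self.rttvar = initial_rtt / 2      # smoothed mean deviation (s)
        self.backoff = 1                   # exponential-backoff multiplier

    def on_ack(self, measured_rtt: float) -> None:
        """Fold a new RTT sample into the running averages."""
        err = measured_rtt - self.srtt
        self.srtt += err / 8               # illustrative gain of 1/8
        self.rttvar += (abs(err) - self.rttvar) / 4   # gain of 1/4
        self.backoff = 1                   # a fresh sample ends the backoff

    def on_timeout(self) -> None:
        """Each timeout of the same segment doubles the timeout value,
        up to an (illustrative) cap."""
        self.backoff = min(self.backoff * 2, 64)

    @property
    def rto(self) -> float:
        """Current retransmission timeout in seconds."""
        return self.backoff * (self.srtt + 2 * self.rttvar)
```

The key property is visible directly in `rto`: steady RTT samples shrink the deviation term and tighten the timeout, while losses inflate it multiplicatively.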

2.3 The Problem: Wireless Variability

Just as Mobile IP provides a solution for mobility in wireless networks, a solution is required to deal with the variability of the wireless environment in the face of such mobility. Such solutions can be divided into two distinct areas: protocol solutions and application solutions. Since most modern operating systems make a distinction between kernel and user space, this distinction is


mirrored in this thesis. While programmers may have access to application-level functionality, protocols that lie below the socket layer are not usually accessible.

To examine the effect of wireless networks on the protocol layer, consider TCP. As discussed above, TCP assumes that packet losses result from congestion. This is a valid assumption as long as error rates remain low and the throughput remains high. However, in a wireless environment, packet losses are more likely to result from transmission errors, or from delay when a mobile executes a hand-off to a new access point. This misinterpretation by TCP causes the protocol to slow its transmission rate when it should in fact be retransmitting the lost packet as soon as possible.

Like TCP, all communication protocols have been built with underlying assumptions about the behaviour of the layers below them. Some of these assumptions have been invalidated by the unforeseen shift to a wireless environment. These protocols now need to be compatible with both static wired networks, for interoperability and legacy reasons, and with the variable wireless network.

At the application layer, variability in the QoS offered by the wireless network can cause even more complex problems. Just as TCP was built with assumptions about the underlying media, applications are built with similar dependencies on the protocol layers below. If the requirements of the application cannot now be met in the wireless environment, its operation may suffer or it may not function at all. For example, real-time audio and video clients are built assuming certain bandwidth and delay characteristics. In a wireless environment, it is unlikely that bandwidth will be sufficient, and packet loss and retransmission will cause variable delays, throwing off any client's packet-handling mechanism.
The following chapter discusses related research, which has proposed solutions to the problem of wireless variability, from link-layer packet-transmission strategies to adaptive application object models.

Chapter 3

Related Work

As mobile computers move from location to location, they can encounter a wide range of communication environments. For instance, they may change from a direct wired connection at a user's desk, to a low-quality wireless link at the coffee shop down the street. Both communication protocols and distributed applications that have been designed and tested in the wired environment are impaired in their operation by the unexpected variability in the transport medium. This chapter presents a variety of application- and protocol-level solutions to network heterogeneity. The work presented has been evaluated against the following criteria.

 Protocol Transparency : Solutions should not interfere with the operation of the wired portions of the network.

- Application Transparency: There should be only minimal changes to existing applications, if any at all.

- General Applicability: Solutions should not be confined to a single domain, but should be applicable in many different application areas.

Protocol transparency is important because of the nature of standardized communication protocols. Since these protocols are developed and placed within the OS, beyond the reach of the average programmer, substantial time and effort are required to create a consensus of what these

CHAPTER 3. RELATED WORK

12

protocols should be. This makes it unlikely that the protocol requirements of a still-relatively-small wireless community will be met in the near future. Another argument is that since the vast majority of wired hosts will never need to deal with mobiles, why should they have to deal with the added complexity of wireless protocols? These arguments have led to the criterion that wireless solutions should be localized to areas that are involved directly with wireless operation. This is one of the reasons why triangular routing is an unfortunate necessity in Mobile IP.

Application transparency was chosen as a criterion for similar reasons. Applications involve a large outlay of resources for the company that produces them. Companies and programmers will be understandably reluctant to duplicate their original efforts if solutions which do not require this are available. The other argument for application transparency comes from the nature of legacy applications. Because of the existing large code base, changes to such applications would be in some cases expensive, and in others, a lack of original source code might make it impossible.

General applicability, the final criterion, was chosen in an attempt to select widely applicable solutions. Instead of devising a single mechanism for each application area or program, solutions should be able to deal with the widest possible variety of problems in the wireless environment.

The different types of solutions presented here can be divided into three approaches. The first approach supports the mobile applications themselves, either by providing infrastructure to mitigate wireless-network effects, or by providing a toolkit for creating new adaptive applications. A second approach offers protocol-level solutions where the nature of the wireless link is hidden as simply a low-bandwidth extension of the network, and errors are hidden by a wireless-specific network- or link-level protocol.
Finally, the third approach splits the network into wireless and wired portions and places a proxy between them. The proxy services the communication stream by manipulating or filtering the data and protocols that pass between the two portions.

3.1 Application-Level Solutions

One way to improve wireless-communication performance is to exploit a support architecture for applications. These architectures provide applications with methods for handling the variability


inherent in wireless communication. The Coda file system [24] provides special file-access services applicable when disconnected or only weakly connected. Rover [11, 12] and WIT [28, 29] are two object-based adaptive-application architectures.

Coda is one of the earliest mobile-application support mechanisms, and is based on a file-system approach. Coda demonstrated that a Unix-style file system can be maintained in a weakly connected or disconnected environment. This is made possible by a variety of replication, file-transaction and cache-management optimizations. The use of hoarding (user-assisted cache management) combined with file-update logging and reintegration schemes allows fully disconnected users to interact with local copies of remote files. When weakly connected, Coda provides rapid cache validation and a trickle reintegration scheme with optimistic concurrency control. Coda showed that database-style methods could improve performance in an environment with at-best weak connectivity. Transaction caching and message queueing were shown to increase the reliability and decrease the response time of the related application. However, using remote files as a communication method is not appropriate for all applications (e.g., streaming video).

The Rover toolkit provides a mechanism for creating new adaptive applications. The toolkit is based on a distributed-object system consisting of relocatable dynamic objects (RDOs) which communicate by the use of queued remote procedure calls (QRPC). RDOs consist of application data which can migrate at run-time between the mobile client and wired server, depending on current network conditions. QRPC, similar to Coda, queues remote procedure calls from the mobile client, buffering messages until network conditions allow for their transmission. The Rover system also provides the support mechanisms for transporting RDOs between the client and server, and for object caching.
This system provides a comprehensive method for the production of adaptive and partitioned mobile applications. However, the system relies on the programmer to rewrite applications in order to exploit the object model. Considering the complexity of some applications, reducing bandwidth by the use of intelligent partitioning may not be worth the effort.

WIT [28, 29] is another adaptive application-support architecture that uses objects to create partitioned applications. In WIT, the data and functions of the application are partitioned into


hyperobjects which can migrate across the wireless link. Applications are built by defining the operations and relationships between hyperobjects. This linked structure allows the underlying system to understand a level of the application's semantic structure. Combined with observed access patterns, the system can make informed policy decisions about which data/objects should be cached or prefetched, and, if necessary, which subset of the data/objects should be migrated to a new location. The WIT project has identified a number of techniques for optimizing communication, including caching, prefetching, data encoding, lazy evaluation, partial evaluation and data reduction. However, the techniques proposed by the system require detailed knowledge of the program domain, as well as re-designing and re-writing applications from scratch, as in the Rover model.

Both WIT and Rover satisfy the general-applicability goal, but fail in application transparency. Although these application-support architectures make it possible to create adaptive applications which work well in the wireless environment, it would be too complex and costly to re-design and re-write such applications.

Application-level solutions show that application adaptability can greatly improve application performance. By giving the application more control over how its data is communicated, and where the computation is done, much of the variability of the wireless medium can be circumvented.
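Rover's queued-RPC idea can be sketched in a few lines. The class below is a hypothetical illustration, not Rover's actual API: calls issued while disconnected are buffered, then flushed in order when the link returns.

```python
from collections import deque

class QueuedRPC:
    """Sketch of Rover-style queued RPC (QRPC): remote calls made while
    the wireless link is down are buffered and replayed on reconnection.
    All names here are illustrative, not Rover's actual interface."""

    def __init__(self, transport):
        self.transport = transport   # callable that performs the real RPC
        self.queue = deque()
        self.connected = False

    def call(self, proc, *args):
        self.queue.append((proc, args))
        if self.connected:
            self.flush()

    def flush(self):
        sent = []
        while self.queue:
            proc, args = self.queue.popleft()
            sent.append(self.transport(proc, *args))
        return sent

    def reconnect(self):
        self.connected = True
        return self.flush()
```

While disconnected, `call()` only queues; `reconnect()` drains the queue in order, preserving the sequence of the mobile client's updates.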

3.2 Protocol-Level Solutions

Another approach to improving the performance of wireless networks is to hide the varying network QoS from applications. The motivation for this view is that since the problem is confined to a single point (the wireless link), the solution should be local as well. Solutions that take this approach attempt to make the wireless link appear simply as a low-bandwidth extension of the network. This can take the form of split-connection approaches such as I-TCP [2], or TCP-aware link-layer protocols such as Snoop [3, 4]. On-the-fly modifications of the underlying protocols can provide wireless-specific services, as shown by BSSP [17].

I-TCP is an indirect transport-layer protocol which replaces a TCP connection with a split


connection: a normal TCP connection between the fixed host and the Mobility Support Router (MSR), and a wireless-specific connection from the MSR to the mobile host. The MSR is a router on the wired network between the sender and receiver. By splitting the connection, the special requirements of the mobile link can be accommodated in the separate connection to the mobile, while the remaining connection is backwards-compatible with the existing fixed network. I-TCP is mainly concerned with separating flow control from congestion control. Special transport protocols support event notification to the application or a partitioned application running on the MSR.

This protocol is the simplest of the improved transport protocols, using a proxy to handle the conversion from one protocol to another. It provides the desired application-level transparency and applicability requirements. However, there are problems with protocol transparency. New wireless protocols must be supported at both the MSR and mobile. Also, the immediate acknowledgment of packets arriving at the MSR from the wired network breaks TCP end-to-end semantics. This can result in the possibly catastrophic position where the sender has received acknowledgment of data which has not yet reached the mobile.

Snoop is a link-layer protocol that includes knowledge of the higher-layer transport protocol, TCP. In simpler link-layer protocols such as AIRMAIL [1], error-correction techniques such as forward error correction (FEC) and automatic repeat request (ARQ) retransmissions are used across the wireless link. Despite the increase in throughput achieved by this method, transport-level protocols may be confused by duplicate acknowledgments from packets that have been retransmitted, causing the sender to "fast retransmit" a packet that has already arrived at the mobile. Snoop, however, suppresses duplicate acknowledgements and keeps track of which segments have been successfully passed to the mobile.
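The duplicate-ACK suppression just described can be sketched as follows. This is a simplified, hypothetical rendering of the Snoop agent's bookkeeping; the real protocol also manages local retransmission timers and round-trip estimates.

```python
class SnoopSketch:
    """Simplified Snoop-style base-station state: cache unacknowledged
    segments heading to the mobile, suppress duplicate ACKs heading to
    the wired sender, and retransmit locally instead."""

    def __init__(self):
        self.last_ack = -1
        self.cache = {}            # seq -> cached segment data

    def on_data(self, seq, segment):
        self.cache[seq] = segment  # cache while awaiting the mobile's ACK

    def on_ack(self, ack):
        """Return (forward_ack_upstream, seq_to_retransmit_locally)."""
        if ack == self.last_ack:
            # Duplicate ACK: drop it so the wired sender never triggers
            # fast retransmit, and resend the missing segment over the
            # wireless hop only.
            return False, ack
        self.last_ack = ack
        for seq in [s for s in self.cache if s < ack]:
            del self.cache[seq]    # delivered; free the cache entry
        return True, None
```

Only new ACKs cross the wired network, so the wired sender's congestion state is never perturbed by wireless retransmissions.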
This protocol takes a slightly lower-level view of the wireless-network problem, and succeeds in mitigating the effects of errors in wireless networks by using error correction and transparent protocol improvements. It also satisfies the application-transparency requirements. However, Snoop is tuned for a single protocol, TCP. The model presented in this thesis provides methods to alter any protocol similarly so as to make more effective use of the wireless link. Section 8.2.1


discusses this method in more detail.

The base station service protocol (BSSP) allows a base station to provide additional services to mobile applications using TCP. The two main services offered are a disconnection-management scheme and a stream-prioritization scheme. Both services change the window size in the TCP header of packets intercepted at the base station. For the disconnection-management scheme, the base station sends "zero window-size messages" (ZWSMs) to the wired sender. The base station creates ZWSMs by setting the receive-window size to zero so that the connection will stall on the sending side as it waits for the window to open. The base station re-opens the window when the mobile reconnects. This allows the serviced stream to stay alive indefinitely and restart faster than if no ZWSM were used and the sender had begun its congestion-control and -avoidance mechanisms. The prioritization scheme reduces the advertised window size of all low-priority streams. This forces them to send more slowly as the window fills sooner, allowing priority streams more bandwidth and smaller delay. Section 8.2.2 discusses this method in more detail.

This scheme satisfies both protocol- and application-transparency requirements, but its applicability is limited to mobile applications which use TCP. This method has been adopted in my proxy model as a type of protocol-level service. By allowing the protocol header of intercepted packets to be changed, the protocol can be altered beyond its initial specification to provide new services for mobile applications.

Protocol-level solutions show that application-independent improvements to communication performance are not only possible, but highly effective. They also point to the potential benefit of using a proxy within the network to modify communication streams so as to handle wireless links more effectively.
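BSSP's header manipulation amounts to rewriting one 16-bit field. The sketch below edits the advertised-window field of a raw TCP header; it is illustrative only, the function name is invented, and it deliberately omits the checksum update a real base-station filter must perform before reinjecting the packet.

```python
import struct

TCP_WINDOW_OFFSET = 14  # the 16-bit window field starts at byte 14

def set_advertised_window(tcp_header: bytes, window: int) -> bytes:
    """Return a copy of a raw TCP header with the advertised receive
    window rewritten: 0 for a BSSP-style zero window-size message
    (ZWSM), or a small value to throttle a low-priority stream.
    Checksum recomputation is omitted here for brevity."""
    return (tcp_header[:TCP_WINDOW_OFFSET]
            + struct.pack("!H", window)
            + tcp_header[TCP_WINDOW_OFFSET + 2:])

# A minimal 20-byte header: ports, seq/ack numbers, offset, flags,
# a 65535-byte window, then checksum and urgent pointer.
header = struct.pack("!HHIIBBHHH", 1234, 80, 0, 0, 5 << 4, 0x10, 65535, 0, 0)
stalled = set_advertised_window(header, 0)   # the ZWSM the base station emits
```

Re-opening the window later is the same operation with a non-zero value, which is all the base station needs to restart the stalled sender.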

3.3 Proxied Solutions

The third approach for improving wireless communication involves the use of a proxy to split the network into wireless and wired portions. The proxy acts as a gateway to the wireless portion of the network and performs a variety of tasks to improve the perceived quality of the network.


TranSend [7] provides a distillation proxy that reduces the data sent to a mobile application by compressing the data stream. MOWGLI [14, 16] provides a modified socket interface that uses a proxied architecture similar to I-TCP. Finally, Zenel [30] describes a general-purpose proxy architecture similar to the one proposed by this thesis.

The TranSend proxy server (previously named Pythia) distills information sent from the proxy to the mobile host. Distillation involves data-type-specific lossy compression such that the semantic content remains, while the size is greatly reduced. As long as the data-type is known in advance, the bandwidth required can be greatly reduced by data-type-specific lossy compression. TranSend also allows users to refine the resulting data object and request more detail on portions of the object that interest them. For instance, if the distilled object were a picture, the user could select an area of the picture for TranSend to "zoom" in on and give greater resolution, number of colours, etc. The project also looks closely at what user interaction is most appropriate for this type of methodology.

This proxy architecture shows that lossy compression and user-specified refinement can greatly reduce transmission times and bandwidth utilization. It satisfies protocol transparency, but every application and its associated proxy must be designed individually. Currently, it has only been implemented specifically for a web browser, which also had to be re-written to make use of the proxy.

The MOWGLI architecture provides a socket API which is similar to Berkeley sockets, but splits the connection into two parts with a store-and-forward-style interceptor/proxy called the Mobile-Connection Host (MCH). Similar to I-TCP, the connection uses standard wired protocols on the wired side, while the wireless side uses wireless-specific protocols. MOWGLI also includes a virtual socket layer on which new mobile-aware applications can be created.
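Distillation and refinement can be illustrated with a toy example. Treating the object as a flat sequence of samples (a real distiller is data-type-specific, e.g. re-encoding an image), decimation stands in for lossy compression and a slice stands in for the user's "zoom" request. Both function names are hypothetical, not TranSend's.

```python
def distill(samples, factor):
    """Toy distillation: keep every factor-th sample, shrinking the
    object while preserving its rough shape. A real distiller would
    apply data-type-specific lossy compression instead."""
    return samples[::factor]

def refine(samples, start, stop):
    """Toy refinement: on request, resend a selected region of the
    original object at full resolution."""
    return samples[start:stop]
```

Distilling 100 samples by a factor of 10 sends one tenth of the data over the wireless link; a later `refine` call recovers full detail for only the region the user selected.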
This layer allows the mobile client to communicate with the MCH proxy to delegate communication and processing tasks. The proxy can also perform some enhanced operations for the mobile application, such as improving fault tolerance by buffering communication. The socket interface can also give feedback to applications about current network conditions.


The MOWGLI architecture offers more flexibility than TranSend, but suffers from the same limitations as all split-protocol approaches. Partial application transparency is maintained, since applications only need to be recompiled with the compatible new type of sockets. Protocol transparency suffers from the problems associated with breaking end-to-end semantics, as in I-TCP. Similarly, mobiles must be able to handle the wireless protocol used by MOWGLI. Any application which uses sockets as its communication method can make use of this architecture.

Zenel's proxy mechanism aims to be a truly general stream-processing proxy system. The Proxy Server provides an execution environment for filtering code, which can be either native to the Server, or downloaded from a repository on a mobile or wired host. Filters are conceptually small applications themselves, and can drop, delay or transform data moving to and from the mobile host. Filters can run either on data streams using a High-Level proxy, or on individual packets using a Low-Level proxy. This distinction was made because modern operating systems make a distinction between application-layer protocols and those that come below (transport/network). The architecture also includes a mechanism for ensuring that all packets bound for a mobile pass through the Proxy (through the use of a modified version of Mobile IP), and a filter-control mechanism which allows filters to be notified of a limited set of network statistics.

This mechanism demonstrates the true potential of a generalized proxy-filtering scheme. Arbitrary code may be executed on the Proxy Server, allowing for a complete range of alterations to the data stream, from altering the communication protocol, to managing the data, to partitioning the application. Note, however, that applications must be re-written to request and control the service filters.
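Zenel's notion of a filter, a small program that may drop, delay, or transform packets, can be captured by a minimal interface. This is a hypothetical sketch, not Zenel's actual code; class and function names are invented.

```python
import zlib

class PacketFilter:
    """Base class for a Zenel-style filter: return the bytes to forward,
    or None to drop (or hold) the packet. Purely illustrative."""
    def process(self, packet: bytes):
        return packet

class DropSmall(PacketFilter):
    """Drop packets below a size threshold (a crude data-reduction filter)."""
    def __init__(self, min_size):
        self.min_size = min_size
    def process(self, packet):
        return packet if len(packet) >= self.min_size else None

class Deflate(PacketFilter):
    """Transform payloads by lossless compression before the wireless hop."""
    def process(self, packet):
        return zlib.compress(packet)

def run_chain(filters, packet):
    """Apply filters in order; a None from any filter drops the packet."""
    for f in filters:
        packet = f.process(packet)
        if packet is None:
            return None
    return packet
```

Because each filter sees the full payload, a chain such as `[DropSmall(4), Deflate()]` can both thin and compress a stream with no application involvement.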
Proxied solutions allow potentially arbitrary manipulation, on the wired network, of communication streams that include wireless links. This means that applications can have their communication controlled intelligently before it is sent over the wireless link, the most likely bottleneck in the communication path.

Project Name   Protocol Transparency   Application Transparency   General Applicability
Coda           Yes                     Yes                        No
Rover          Yes                     No                         Yes
WIT            Yes                     No                         Yes
I-TCP          No                      Yes                        No
Snoop          Yes                     Yes                        No
BSSP           Yes                     Yes                        No
TranSend       No                      No                         No
MOWGLI         No                      No                         No
Columbia       No                      No                         Yes

Table 3.1: A Comparison of the Work Reviewed

3.4 Summary

This chapter has reviewed a wide range of proposals for helping applications handle the heterogeneity of wireless networks (see Table 3.1). High-level work focused on how to make applications adaptive to the underlying communication variability. Handling variability through the file system gives a high level of transparency, but is not appropriate for all types of communication. Adaptive application toolkits provide protocol transparency and wide applicability, but the applications must be re-designed and re-written at an incremental cost of time and effort.

Low-level work has focused on hiding variability by using protocols tuned for wireless links. Though they provide application- and protocol-level transparency, such changes are often tied to a single protocol, in most cases TCP. TCP can be split into wired and wireless halves with improved throughput at the link layer, but at the cost of end-to-end semantics. Additional wireless-specific services can be added on top of TCP through packet-header manipulation.

Proxy architectures can potentially provide both protocol and application transparency, and can be applied to most application areas. Proxies can be used to distill data for use in specific applications, or to create a wireless-compatible socket-level abstraction with split wired and wireless protocols. General-purpose proxies allow for broad packet and data-stream manipulations. Because of the flexibility and transparency made possible by proxy architectures, this approach was selected for the creation of a communication manager for mobile applications (named Comma).


This architecture has now been extended with an implementation of a user interface named Kati. By adding a method for third parties to monitor and control protocol services, the door was opened for transparent service control. An overview of the design and operation of this enhanced architecture is presented in the following chapter.

Chapter 4

Architecture

In order to deal with network variability, I have chosen to use a proxy architecture to provide adaptive stream services. Contemporary proxy architectures operate through the use of an intermediary. The intermediary is placed within the communication stream between the wired and wireless portions of distributed applications so that the stream itself can be processed or filtered. The nature of the processing depends on the application and protocols to be serviced, but usually involves either protocol translation (using a wireless protocol on the wireless side of the connection) or data reduction (through data removal, hierarchical discard, or data-type translation). There are many advantages to using a proxy architecture to manipulate communication streams.

- Protocol-Level Control: Since the granularity of the stream being intercepted can be as low as the packets themselves, the communication protocols being used can be manipulated or changed as required. The end-to-end semantic problem introduced by split-stream processing can be handled by careful design and the use of special control packets.

- Application-Level Control: Since all data is made available by stream interception, applications can be partitioned by placing stream-manipulation code on the proxy. The code can modify the data stream to increase performance.

CHAPTER 4. ARCHITECTURE

22

- Wide Applicability: The execution environment within the proxy, which runs stream-manipulation filters, provides applicability to multiple program domains and multiple types of best-effort networks. Filters may then be created for most eventualities, from application to hardware.

- Single-Point Control: Since the proxy provides a point from which all packets can be seen, a new tool emerges from which several advantages can be gleaned. Users can use this well-known point of control to make service requests. Applications need only communicate with a single administrative point. Filter code can be sure to collect all traffic and use it to adapt to current network conditions.

The drawback of these systems is that the services offered can only be deployed and controlled by the application. Services are defined as the stream behaviour elicited through the packet filtering provided by a set of one or more complementary proxy filters. When it comes to legacy applications, which cannot be altered, services must be controlled through some other mechanism. This mechanism is provided through a user-level interface named Kati.

4.1 Architecture Overview

In order to provide a feature-rich proxied system as described above, an architecture was developed that consists of three main components.

- A communication-modification mechanism that provides the necessary packet-interception and processing facilities to constitute a viable stream-processing platform.

- A network-monitoring mechanism that provides mobile applications and filters with network-environment metrics. These statistics can be used to determine behaviour and so adapt to available network quality and resources.

- A service-control mechanism, a new component, that allows external control of the service proxy. It takes the form of a user interface to the streams and services available at a particular service proxy. Mobile users may add services to, and remove services from, streams passing through the service proxy.


[Figure 4.1: Enhanced-Proxy Architecture — the mobile host (client application, Kati) and the wired host (server application) communicate through the Service Proxy, which comprises a packet-interception module, a filtering mechanism, a filter-management module, an exception handler, a protected data area, and the execution-environment monitor.]

Figure 4.1: Enhanced-Proxy Architecture

The combined inability of applications to adapt to a varying execution environment and the poor performance of communication protocols in a mobile environment led to the development of a mobile-application support architecture called the Communication Manager for Mobile Applications (Comma) [13]. (See Figure 4.1.) Comma enables adaptive applications by providing methods for execution-environment monitoring, and for protocol and data-stream manipulation.

Comma Service Proxies (SPs) provide the ability to modify communication streams that travel to and from the mobile host. Packets are intercepted by the Packet Interception Module and passed to the appropriate stream-service code, organized into filters. These filters can then alter the header and content of the packet before reinjecting it onto the network. This allows applications to be partitioned, communication protocols to be modified transparently, and generalized services to be offered to packet-based communication streams.

The Comma execution-environment monitor (EEM) provides an effective and extensible network monitor. EEM clients run as user-level threads which can form part of an application or even of SP filters. The client thread communicates with each EEM server in which the application or filter has registered an interest. EEM server daemons can be run on any wired or wireless host. They gather local network and machine statistics and pass this information to any interested client. Such information is either stored in the EEM-client Protected Data Area or communicated directly to the application by the use of the Exception Handler. The EEM server has been designed with a modularized query mechanism. This allows application designers to extend the EEM to monitor a host in a way specific to an application.

In order to co-ordinate the previous two mechanisms and allow external control and monitoring, a third mechanism has since been developed. The user shell, which I have called Kati, provides the user with an interface to the operation of the SPs and the EEM Servers. Kati has three main functions. Its primary role is as a monitoring tool. Kati enables direct observation of execution-time statistics through its interface with the EEM Servers. It also monitors the operation of the SPs, indicating which streams are currently active, which filters are currently being applied to each stream, and which filters are available for use by a particular SP. Kati can also be used as a debugging tool by monitoring application interaction with execution measures and SP filters. Finally, Kati is an interactive-control tool. From the console, services for individual streams can be requested or removed. Applications can make use of these services through the use of a library interface.
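The EEM's interest-registration model can be sketched as a small publish/subscribe loop. This is a schematic illustration, not Comma's actual interface; class, method, and metric names are invented.

```python
class EEMServerSketch:
    """Sketch of an EEM server: clients (applications or SP filters)
    register interest in named metrics; the server pushes each new
    observation to every interested client callback."""

    def __init__(self):
        self.interested = {}   # metric name -> list of client callbacks

    def register(self, metric, callback):
        self.interested.setdefault(metric, []).append(callback)

    def publish(self, metric, value):
        for callback in self.interested.get(metric, []):
            callback(metric, value)

# A filter adapting its behaviour to reported wireless bandwidth:
seen = []
server = EEMServerSketch()
server.register("wireless.bandwidth", lambda metric, value: seen.append(value))
server.publish("wireless.bandwidth", 19200)
```

Clients that have not registered interest in a metric never hear about it, which keeps monitoring traffic proportional to what each filter actually needs.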

4.2 Thesis Organization

The following four chapters describe the design and implementation of this architecture. This design has been broken into the following areas:

1. Service Proxy. Stream processing is performed by filters running on the Service Proxy. A detailed description of the design and operation of the interception and filter-execution environment is given in Chapter 5.

2. Network Monitor. Adaptive services require some mechanism that allows them to gather information about their execution environment. A filter- and application-monitoring aid is described in Chapter 6.

3. Transparent Service Control. In order to support filters which do not require application-level control, a third-party service-control mechanism (Kati) was developed. This user-level service-monitoring and control mechanism is presented in Chapter 7.


4. Stream Services. Transparent services require protocol-level support filters. Such a filter has been developed for TCP, and is explained in Chapter 8 along with a number of filters whose services would be complementary to such a system.

Each chapter gives a brief overview of the respective interfaces and an example of their use.

Chapter 5

Service Proxy

To support communication management with a proxy, methods for intercepting and then modifying communication streams are required. The proxy system used for this research was the Comma Service Proxy (SP), developed at the University of Waterloo [13]. The SP provides packet-level interception on a designated host. Packets are intercepted and passed to filter code which matches the key of the associated communication stream. Filter code gains access to the full packet, and can alter the protocol headers and content of the packet. This allows applications to be partitioned, communication protocols to be modified, and generalized services to be offered to data streams.

Section 5.1 gives a brief overview of the issues and design decisions in the creation of the Comma SP, followed by a detailed description of its design and operation in Section 5.2. Section 5.3 includes a brief overview of the interface to server operation, and an example of its use concludes the chapter in Section 5.4. Security concerns raised by this design are discussed in Chapter 9.

5.1 Issues

Service proxies are made up of two main components. A stream-interception component is required to remove all related packets from the network and pass them to the appropriate service code. The

CHAPTER 5. SERVICE PROXY

27

service-execution environment enables filters to execute packet-processing algorithms on stream data and submit the modified packet for re-insertion onto the network. Several design decisions must be made when creating a proxy server; these are covered in the next three sub-sections.

5.1.1 Proxy Mobility

Stream interception is a difficult problem in itself. The packetized nature of modern network communication can cause individual packets of the same stream to take different routes, depending on the ever-changing state of the underlying network. To intercept the full stream successfully, every packet must be intercepted. This is necessary to fully interpret and service application data and communication protocols. The proxy must therefore be placed at a routing bottleneck.

The most obvious choice is to place the proxy at the interface between wired and local wireless networks. This is a natural bottleneck where packets bound for the mobile are queued for transmission on the much slower wireless network. The problem, however, is to force all traffic to pass through this particular entry-point. Several options are available. One is to require that each wireless network have a single wired attachment which also serves as the interception point. Another possibility is to tie the routing of packets bound to and from the wireless network to a single point on the intervening network. As proposed individually by Lioy [17] and Zenel [30], it may be possible to use the foreign agent (FA) of Mobile IP as the desired gateway. Since all traffic is forwarded to the FA before being decapsulated and sent on to the mobile, the FA could be combined with the proxy to provide both mobility and application/protocol services.

At the moment, the Comma SP uses the simpler "forced" method. However, as our implementation develops, the interception point will eventually be merged with an implementation of Mobile IP and incorporated into the operation of the FA. This problem is left as future work.


5.1.2 The End-to-End Semantics Problem

One of the problems of current proxy systems has to do with the way in which the proxy inserts filters. Filter insertion to date (for instance [2, 30]) has involved first splitting the existing communication stream into two separate streams and then connecting the ends of the new streams with the corresponding input and output interfaces of the filter being inserted. This split-connection approach leads to what could be a potentially dangerous violation of transport-level end-to-end semantics. Since the two streams work separately from each other, data sent on the first, wired half of the connection may be acknowledged by the proxy before the corresponding data has reached the final destination on the second half of the connection. This may lead to the position where the first half of the connection has closed while the second half still struggles to get the last pieces of data across. Problems then arise if an error occurs and the sender needs to be notified.

An alternate proxy mechanism does not split the connection, but instead provides mechanisms by which filters can act directly as protocol- and data-level converters on existing data streams. Data streams are interpreted at the packet level so that packet headers and data can be changed, but the semantics of the exchange are not modified. This method was chosen for this thesis and is explained in more detail in Chapter 8.

5.1.3 Run-Time Environment

In order to run service filters, an execution environment for those filters is required. The purpose of this environment is to limit the interaction of the filter with sensitive resources on its host machine. The run-time access of the filters determines not only the degree of trust that must be placed in services performed on the proxy, but also the capabilities of the filters themselves. There are two alternative types of environments available: interpretive environments and binary environments.

In interpretive environments, filters are run within the proxy using an interpreter such as the Java interpreter. Filters are compiled into virtual-machine code, loaded into the proxy, verified in some way, and executed on a virtual machine. The main advantages of this approach are portability


and security. Because of the interpreted nature of the filters, they are portable to any machine that supports the interpreter itself. In the case of Java, which prides itself on its "write once, run anywhere" slogan, this can be a large percentage of the hosts of interest. Also, the interpretive environment can provide security guarantees about the use of machine resources. Most interpreted languages argue that the use of virtual-machine instructions allows for much greater security and control of code. The main disadvantage of interpreted environments is the speed of execution. Filters may be unable to process packets fast enough to deal with real-time traffic. This problem may disappear with improvements in interpreters and hardware.

In binary environments, filters must be compiled for the specific host architecture on which they are to be run. Filters are then loaded directly into the execution space of the proxy and run as part of the proxy process. The main advantage of this approach is execution speed, since data processing is run directly in machine instructions. This method does, however, lead to problems with security and portability. Compiled filters have access to all system calls, and even unintentional errors may compromise the system on which a filter is running. Also, since the filter is compiled into machine-specific instructions, filters can only be loaded into proxies running on similar architectures.

The binary environment was chosen for the implementation of the Comma SP. This was done mainly for speed of implementation. A dynamic loading facility (the "dl" library) is used to load filters at run time. Security issues arising from this proxy system are covered in Chapter 9.

5.2 Service-Proxy Design

The SP provides a mechanism for filtering packets bound to or from a mobile host. This single mechanism can be used to implement three classes of wireless services. First, a service filter can include part of the code of an application, resulting in application partitioning. Although not originally implemented for the purpose, this mechanism would be appropriate for dynamic object migration as shown by M-Mail [18]. Second, it can be used for data-filtering purposes, such as web-page compression [7] or DNS prefetching [27]. Third, the mechanism supports various types


of protocol modification such as Snoop [4] and BSSP [17].

[Figure 5.1: The Service-Proxy (SP) Architecture — the packet-intercept module, stream registry with source/destination keys, filter-management module, and the individual filters (4 methods each)]

Currently, the SP is only capable of handling TCP packets, though the design will eventually be extended to handle other transport-level protocols.

The SP design has four main components: packet interception, which removes packets from the network and matches each packet with a set of requested services; filter management, which assigns filters to new packet streams as well as handling the dynamic addition and removal of filters from the filter pool; filter accounting, which keeps track of packet streams and the services applied to these streams; and, of course, the filters themselves. This architecture is shown in Figure 5.1.

In order to manipulate packets at the SP, we have designed a filtering mechanism that takes a packet from the network, matches this packet with a set of filters, and then passes the packet to those filters for servicing. In order to identify communication streams uniquely, filters are associated with packet keys. A key is an ordered quadruple consisting of the source IP address and port, and the destination IP address and port. Together, these four uniquely identify a stream. Note that this implies that streams are directional. Most streams have an associated


stream in the reverse direction which would have a key with the source and destination numbers reversed. Though this key may not remain unique over time, it provides a unique identifier during its lifetime. It is up to the application, or to a user of Kati, to specify which filters should be applied to which stream keys.

In order to allow a filter to match multiple streams, portions of the key can be left blank, creating a "wild-card" key. A match is made if all but the blank portions of the wild-card key match the stream key. For instance, a wild-card key for a certain filter may give the destination IP address as the IP address of the mobile, and leave the rest blank. Then, all streams bound for any port on the mobile host will match. Also, because certain protocols have been assigned static port numbers, wild-card keys can be used to match specific protocols easily.

Filter management keeps track of the filters currently available and the keys associated with them. New filter-key bindings can be requested by the application or by mobile users using Kati. This process adds the key into the stream registry and associates it with the desired filter and any parameters for the filter included in the registration. The filters themselves are kept in a filter pool and can be compiled into the SP as one of a standard set of services or loaded dynamically during the operation of the SP.

When a new packet reaches the SP, it is intercepted and presented to the packet-detection module for inspection. If the stream registry does not contain an entry for the exact key, then this is the first packet of a new stream, and a "filter queue" for this stream must be created. A filter queue is conceptually a double queue of filter methods, an in and an out queue. The purpose of the in queue is to allow all filters to read the packet before any modifications are made. The out queue gives filters the ability to change packet contents and headers, possibly overwriting the changes of filters with lower priority.
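The wild-card matching described above can be sketched as follows (an illustrative fragment; the use of None for blank fields is an assumption, not the SP's actual representation):

```python
# A key is (src_ip, src_port, dst_ip, dst_port); None marks a blank
# ("wild-card") field in this sketch.
def key_matches(wildcard, stream_key):
    """A match occurs when every non-blank field of the wild-card key
    equals the corresponding field of the concrete stream key."""
    return all(w is None or w == s for w, s in zip(wildcard, stream_key))

# Wild-card: any stream bound for any port on the mobile host 11.11.10.10.
to_mobile = (None, None, "11.11.10.10", None)

assert key_matches(to_mobile, ("11.11.10.99", 7, "11.11.10.10", 1169))
# Streams are directional: the reverse stream does not match.
assert not key_matches(to_mobile, ("11.11.10.10", 1169, "11.11.10.99", 7))

# Static port numbers let a wild-card select a protocol, e.g. HTTP on port 80.
http_streams = (None, None, None, 80)
assert key_matches(http_streams, ("1.2.3.4", 5000, "5.6.7.8", 80))
```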
The packet is first passed to the top in method of the in queue, then down to the second, and so on to the bottom in method (see Figure 5.2). In methods are allowed to read but not modify the packet. The packet is then passed to the bottom out filter method. This is the first method that can modify the packet. From there, the packet is passed to the second-last out method, which can change the packet, potentially overwriting the modifications of the previous filter. The


packet is then passed up the out queue until all filter methods have had their chance to modify the packet. If the packet has not been dropped completely, the resulting packet is reinjected into the network.

[Figure 5.2: Detail of the SP Filtering Mechanism — the packet-intercept module matches an incoming packet's key k and passes it down the in filter queue and back up the out filter queue for that key; each filter contributes 2 methods per key]

A filter queue is built by creating a new instantiation of each filter object in the stream registry whose associated wild-card key matches the packet key and ordering their methods into filter queues. Every filter has an insertion method associated with it which matches its other internal methods to either the in or out portion of a filter queue on a specific key. Usually, the filter will use the key of the packet which caused the insertion method to be called, but it may add methods to other keys as well. It is quite common for the filter to add methods in the reverse direction of the stream, for example. Potentially, filters may add methods to completely unrelated streams. For example, if a filter wanted to monitor all the TCP streams of an HTTP proxy, it could insert methods on additional streams which were known to be part of the WWW session.

Once all methods for a key have been inserted by the various filter-insertion methods, these methods are placed in order. The current method for selecting an order involves a simple priority


mechanism. Each filter is created with a priority. High-priority filters have their methods placed at the beginning of the in queue and the end of the out queue. This allows them to override the changes of lower-priority filters before the packet is reinserted onto the network. This priority-based ordering works well when all filters are created at the same time and all side effects of other filters are well known. Priorities of filters can then be chosen such that filters which rely on the changes of another filter can be given higher priority. In the future, priority mechanisms will need to include specification comparison and conflict-resolution methods to handle filters not created together as a base set of well-understood services.

Once the filter queue is created, or if a filter queue already exists for its key, arriving packets are presented to the first in method for the key. This corresponds to the highest-priority filter, or the top method in Figure 5.2. Once the packet has been read going down the in filter queue, being inspected by filters with successively lower priority, it is presented to the lowest-priority filter method in the out queue. It is then modified by filters with higher and higher priority until it once again reaches the "top" of the queue and is reinjected into the network.

Filter accounting is a side effect of both packet detection and filter management. Whenever new streams are discovered and filters instantiated to service them, statistics are compiled internally. This information can be obtained using a special connection to the SP and is currently used only by Kati to display stream information to interested users. This interface is described in the following section.
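The priority-ordered traversal described above can be sketched as a minimal fragment (an assumed structure, not the SP's actual code): in methods run from highest to lowest priority and may only read, while out methods run from lowest to highest priority and may modify, so the highest-priority filter writes last.

```python
def run_filter_queue(filters, packet):
    """filters: list of (priority, in_method, out_method) tuples."""
    by_prio = sorted(filters, key=lambda f: f[0], reverse=True)
    for _prio, in_method, _out in by_prio:            # down the in queue (read-only)
        in_method(packet)
    for _prio, _in, out_method in reversed(by_prio):  # up the out queue (may modify)
        packet = out_method(packet)
        if packet is None:                            # a filter dropped the packet
            return None
    return packet                                     # would be reinjected

trace = []

def make_filter(name, prio):
    # The out method records its call and passes the packet through unchanged.
    return (prio,
            lambda pkt: trace.append(("in", name)),
            lambda pkt: (trace.append(("out", name)), pkt)[1])

queue = [make_filter("tcp", 2), make_filter("wsize", 1)]
result = run_filter_queue(queue, {"win": 8192})
assert result == {"win": 8192}
assert trace == [("in", "tcp"), ("in", "wsize"),
                 ("out", "wsize"), ("out", "tcp")]
```

Note how the high-priority tcp filter reads first and writes last, mirroring the ordering rationale given in the interface example of Section 5.3.2.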

5.3 Service-Proxy Interface

The interface is a command-line interface accessed via a telnet session to a port (12000) on the SP machine. Once connected, the SP can be controlled using the commands described in the following section.


5.3.1 Command Summary

The following commands are available via the telnet interface. Commands give no feedback unless otherwise specified (fail-silent).

- load <filter-library-file>

  Attempts to load the specified Filter Library File. If successful, prints the name of the filter that was registered. (Use this name for the "add" command.)

- remove <filter-library-file>

  Attempts to unload the specified Filter Library File.

- add <filter> <key> [<args>]

  Adds the specified filter onto the specified key. The key may be a wild-card key. The <args> is whatever string follows the key specification and is passed as an array of strings to the filter's insertion method when it is instantiated. The <args> may be optional or required depending on the filter type which is to be added.

- delete <filter> <key>

  Deletes the specified filter for the specified key.

- report [<filter>]

  Reports on what stream keys are being serviced by filter <filter>. If <filter> is not specified, all filters and their associated stream keys are listed.

5.3.2 Interface Example

The following example shows a sample session with a user on the host styx connected via port 12000 to the SP running on the host eramosa (see Figure 5.3). In this example, after connecting to the SP interface on eramosa (129.97.40.42), the user first issues a report command (line 6) and determines that there are currently four filters loaded and two keys active. The tcp filter watches TCP streams, recalculating IP checksums as necessary and deleting all filters associated with TCP streams when the stream closes. It is currently servicing a


single stream 11.11.10.99 7 -> 11.11.10.10 1169. Note that the two hosts in this connection, 11.11.10.99 and 11.11.10.10, are being simulated on eramosa. The launcher filter runs on wild-card keys and adds filters to new streams which match its wild-card key. As can be seen on lines 9-10, it is watching 11.11.10.10 0 -> 0.0.0.0 0. It is currently applying tcp and wsize filters on matching streams. Before this example, the stream 11.11.10.99 7 -> 11.11.10.10 1169 was detected and the two filters applied. The wsize filter alters the TCP window size (see Section 8.2.2 for a description) and is also servicing the only real stream 11.11.10.99 7 -> 11.11.10.10 1169. The rdrop filter is currently loaded but is not applied to any streams. It is a transparency-support filter (see Section 8.1) that randomly drops packets with a given frequency.

The user decides to remove the wsize filter and instead use an rdrop filter with a drop rate of 50%. Line 15 shows a well-formed add command for the rdrop filter, where 50 is the additional parameter. Note that the following report command (line 17) shows that the filter has in fact been loaded (see line 25). The delete command on line 27 is successful, as the wsize filter no longer has any associated streams (line 34).

The filters described above are all direction-insensitive and have the following priorities: launcher - HIGHEST, tcp - HIGH, rdrop - LOW, wsize - LOWEST. Thus, when the report command at line 17 was given, packets on the stream 11.11.10.99 7 -> 11.11.10.10 1169 would first be inspected by the tcp filter, then the rdrop and wsize filters. The packet would then be modified by the wsize filter, followed respectively by the rdrop and tcp filters. This ordering prevents the tcp filter from calculating the IP checksum before all changes to the packet are made and allows the rdrop filter to drop packets without regard to the changes made by the wsize filter.

This chapter has described the issues, design and interface of the Service Proxy used in Comma.
The following chapter follows the same format to explain the Comma Execution-Environment Monitor.


 1  styx:~> telnet eramosa 12000
 2  Trying 129.97.40.42...
 3  Connected to eramosa.uwaterloo.ca.
 4  Escape character is '^]'.
 5
 6  report
 7  tcp
 8      11.11.10.99 7 -> 11.11.10.10 1169
 9  launcher
10      11.11.10.10 0 -> 0.0.0.0 0
11  wsize
12      11.11.10.99 7 -> 11.11.10.10 1169
13  rdrop
14
15  add rdrop 11.11.10.99 7 11.11.10.10 1169 50
16
17  report
18  tcp
19      11.11.10.99 7 -> 11.11.10.10 1169
20  launcher
21      11.11.10.10 0 -> 0.0.0.0 0
22  wsize
23      11.11.10.99 7 -> 11.11.10.10 1169
24  rdrop
25      11.11.10.10 1169 -> 11.11.10.99 7
26
27  delete wsize 11.11.10.99 7 11.11.10.10 1169
28
29  report
30  tcp
31      11.11.10.99 7 -> 11.11.10.10 1169
32  launcher
33      11.11.10.10 0 -> 0.0.0.0 0
34  wsize
35  rdrop
36      11.11.10.10 1169 -> 11.11.10.99 7
37
38  ^]
39  telnet> quit
40  Connection closed.

Figure 5.3: SP Interface Example
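In the example above, ordering matters because the tcp filter must recompute checksums only after wsize has rewritten the window field. As a generic illustration of that recomputation (not the SP's code), the Internet checksum of RFC 1071 used by IP and TCP can be recalculated after a header edit as follows; the offsets follow the TCP header layout (window at byte 14, checksum at byte 16), and this sketch deliberately ignores the pseudo-header and payload that a real TCP checksum also covers:

```python
# One's-complement Internet checksum (RFC 1071) over a byte string.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

header = bytearray(20)                       # zeroed 20-byte header for illustration
header[14:16] = (8192).to_bytes(2, "big")    # rewrite the window field (like wsize)
header[16:18] = b"\x00\x00"                  # checksum field zeroed before summing
cksum = internet_checksum(bytes(header))     # recompute (like the tcp filter)
header[16:18] = cksum.to_bytes(2, "big")

# With the checksum in place, the header verifies (sums to zero):
assert internet_checksum(bytes(header)) == 0
```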


Chapter 6

Network Monitor

It is widely believed that the application-level solution to variable network QoS is to make applications adaptive to changes in the underlying network. Applications could then alter their operation to reduce communication in times of low bandwidth. This allows the application to continue operating, though the user might perceive inferior service from the application at that time.

This idea can be extended to data streams as well. If communication streams could be shaped to the available QoS without compromising the operation of the distributed application, varying QoS could be handled by the use of a proxy mechanism. Filters can then prioritize information to be sent to the mobile so that in times of low QoS, minimal operation can continue and regular operation resume in periods of high QoS. Such services are presented in Chapter 8.

In order to support such adaptability, it is necessary to obtain accurate information about the state of the network. The Comma Execution Environment Monitor (EEM) allows clients (filters or distributed applications) to register interest in one or more metrics from one or more EEM servers. EEM clients run as application threads that communicate with EEM servers on hosts in which the application has registered an interest. EEM servers can run on any networked host, and gather local network and machine statistics. The EEM server has been designed so that it can access a wide and easily extensible variety of information sources on its local host. This


allows application designers to extend the EEM model so that clients can monitor environment conditions of specific interest to them.

6.1 Issues

Network monitoring has two main components: a data-gathering component, and a data-dissemination component. The data-gathering component either polls system metrics, or connects with other components to query their knowledge bases. In order to pass this information on to interested applications, some method of communicating that information is required. The following areas of concern have led to the design of the existing EEM.

6.1.1 Data Sources

In order to effectively characterize the state of the network, a wide variety of environment measures or metrics must be available to the application. There is still much debate on how to characterize good and bad network performance. Since this is the case, it was decided not to limit the design of the EEM to a single set of metrics, but to use a more modular approach where new measures could be added to the monitor at a later date.

6.1.2 Generated Traffic

An area of concern for network monitors is the amount of traffic produced by client updates. In resource-poor environments, such as wireless networks, the use of resources should be minimized. In order to reduce network utilization, such as that caused by the individual message-per-metric overhead of polling, we have centralized all data gathering on servers which monitor their own local environment. Monitor servers have been made as portable as possible so that they can be placed on any host on which a network data source, such as SNMP [5], exists. Monitor clients connect with remote servers indicating what metrics interest them and at what point they wish to be informed. The client will only receive messages from the server of the metrics which meet those criteria, at the time specified by the client: immediately for interrupt-style notifications, or


in a certain amount of time for periodic updates. Combined with a lean data-transfer protocol between client and server, the traffic generated by monitor updates is greatly reduced.

6.1.3 Notification Method

One of the most important questions for a monitor designer is when and how the client should be notified about the state of the network. The three main options are: an interrupt approach, where the client is notified immediately about changes; a periodic approach, where the client is notified of changes at regular intervals; and allowing the client to poll the information sources itself. When a client wishes to be notified of changes in its execution environment, it must first indicate which metrics it is interested in and what values of the metric cause notification.

The advantage of the interrupt-notification approach is the speed and nature of the information arrival. Since the message about the state of the network acts as an interrupt to the regular operation of the application or filter, important changes in the state of the network will be noticed and handled early. The drawback comes from the complexity of programming for such changes. A handling routine must be created for the metrics and the associated program must be able to handle one or more interrupts.

Periodic client notifications allow for much less intrusive updates. Periodic notification can be done in the background and it is left to the program to decide when to look at the local copy of the current network metrics. This leads to a much less complicated program, but important changes may be missed until the program explicitly checks the stored values.

A more active approach is also available where the client queries the information sources directly. This method has the advantage that queries of the data source are made only when needed by the client. However, there are several disadvantages. Where more passive approaches can hide the differences in query methods of different data sources, the polling client must make all such requests itself.
Communication overhead is also greatly increased since different metrics must be queried separately, where both periodic and interrupt-style updates can include all related information in a single message. Also, update-style messages will include only such variables as


have changed, reducing overhead further. A final consideration involves the synchronous nature of polling. Unless a more complex threaded communication style is used by the client, polling leads to pauses of execution while server requests are processed. This is unacceptable for real-time operation of clients such as filters.

A mixed approach was decided on for the EEM, where all three types of notification would be available. This has led to a complex server model, but the client now has the option of choosing which method or combination of the three methods is most appropriate for its operation. When the client initiates a monitor session, it can request interrupt-style as well as periodic notification. Functions are also available to poll the EEM server directly about individual variables.
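A rough sketch of how a client might combine the interrupt and periodic styles is given below; all names and structures here are illustrative assumptions, not the EEM's actual interface:

```python
# Illustrative client-side dispatch: interrupt-style messages invoke a
# callback immediately, while every update also lands silently in a
# protected data area (PDA) that the application reads at its leisure.
class MonitorClient:
    def __init__(self, callback=None):
        self.callback = callback    # optional interrupt-style handler
        self.pda = {}               # protected data area for silent updates

    def on_message(self, var, value, style):
        """Invoked by the connection thread when a server message arrives."""
        if style == "interrupt" and self.callback:
            self.callback(var, value)    # handled immediately
        self.pda[var] = value            # latest value always retained

    def query(self, var):
        return self.pda.get(var)         # periodic-style, non-blocking read

hits = []
client = MonitorClient(callback=lambda v, x: hits.append((v, x)))
client.on_message("netLatency", 250, "interrupt")
client.on_message("cpuLoadAvg", 0.3, "periodic")
assert hits == [("netLatency", 250)]       # only the interrupt fired the callback
assert client.query("cpuLoadAvg") == 0.3   # periodic value available on demand
```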

6.2 Monitor Design

The Comma Execution-Environment Monitor (EEM) is a network- and computing-environment report tool. EEM servers run on suitable hosts and gather information on local performance metrics for local or remote clients. The EEM is configurable so that it can gather information from any local information source, including user-written ones.

The EEM design has four main parts: the client functional library, which presents an abstraction of the services offered by the EEM; the server process, which accepts and services requests from the clients; a client-supplied callback function, which is combined with an exception handler for interrupt-style notifications; and a protected data area, which is used for periodic updates. The architecture is shown in Figure 6.1.

To use the EEM, clients, which may be applications or SP filters, call an initialization function specifying the address of a callback function if interrupt-style notification is desired. Initialization also clears the protected data area and starts a second thread to handle communication with EEM servers. The client can then register an interest in network- and execution-environment metrics or variables. The actual variables available at any EEM server will depend on the particular host, but it is expected that at least the SNMP variables will be available. It is hoped that, eventually, a


standard set of metrics will be provided. However, the definition of a set of measures appropriate to all applications is beyond the scope of this thesis.

[Figure 6.1: The Execution Environment Monitor (EEM) Architecture — the client main thread, server connection thread, exception handler, client info block, and protected data area, together with the server's network manager and registration service, reached through comma_init(&handler) and comma_register(id,signature)]

To register interest in a variable, the client first creates a variable ID consisting of the variable name and the host on which the variable will be measured. This is accompanied by a "signature", consisting of a range within which values of the variable must fall for notification to occur and a method of notification. Applications may be notified in one of two ways when an EEM finds that a registered variable falls within its requested range. The first is an interrupt-driven callback. If a notification arrives for a variable for which interrupt notification was requested, the exception handler immediately calls the callback function provided by the client on initialization. It is then up to the developer to handle the information passed to the function. The second and less intrusive method of notification involves periodic silent updates to a protected data store. The application can query the data store to determine whether a variable has changed or what the most up-to-date value is.

Whenever a client registers for a variable on an EEM server not already connected to the client, the connection thread opens a connection to the new host, sends the new variable registration information, and then receive-blocks until it receives an update from the server. When information is received on this connection, the message is parsed by the exception handler and either a call to


the callback function is made or the common data area is updated.

The server initially waits for registrations from clients. Whenever it receives a request, it updates its database, taking note of the requesting host and port number. The server then makes periodic checks of the variables registered by all clients and compares them to the conditions under which each client asked to be informed. If an interrupt-style variable has changed into the desired range, a notification message is sent immediately to the appropriate client. Otherwise, an update containing all variables that fall within their requested range is sent to the appropriate client once all variables have been checked. Polling is also supported by allowing for temporary registrations which are immediately removed after the requested metric has been retrieved and sent back to the client.

This simple and extensible approach provides applications with the network and execution-environment metrics necessary for adaptation.
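The server-side check loop described above might look roughly like the following sketch; the names, registration fields, and message shapes are all assumptions for illustration:

```python
# Illustrative check loop: interrupt-style hits are sent the moment a
# variable changes into its requested range; in-range periodic variables
# are batched into a single update per client.
def check_registrations(registrations, read_var, send):
    batches = {}
    for reg in registrations:
        value = read_var(reg["var"])
        in_range = reg["lo"] <= value <= reg["hi"]
        if reg["style"] == "interrupt":
            if in_range and not reg["last_in_range"]:     # changed INTO range
                send(reg["host"], [(reg["var"], value)])  # notify immediately
        elif in_range:
            batches.setdefault(reg["host"], []).append((reg["var"], value))
        reg["last_in_range"] = in_range
    for host, vars_ in batches.items():                   # one periodic update
        send(host, vars_)

sent = []
metrics = {"cpuLoadAvg": 0.9, "netLatency": 12}
regs = [
    {"host": "styx", "var": "cpuLoadAvg", "lo": 0.5, "hi": 1.0,
     "style": "interrupt", "last_in_range": False},
    {"host": "styx", "var": "netLatency", "lo": 0, "hi": 100,
     "style": "periodic", "last_in_range": True},
]
check_registrations(regs, metrics.get, lambda h, v: sent.append((h, v)))
assert sent == [("styx", [("cpuLoadAvg", 0.9)]),
                ("styx", [("netLatency", 12)])]
```

Batching the periodic variables into one message per client reflects the traffic-reduction goal discussed in Section 6.1.2.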

6.3 EEM Interface

This section describes the variables and interface functions available to EEM clients. A list of the server variables currently available is given, most of which are retrieved from local SNMP servers. The functional interface to the EEM is described in some detail since it was created as part of this thesis for use with the Kati shell. Finally, a brief test program is described which uses the EEM interface.

6.3.1 EEM Variables

The EEM server uses SNMP [5] as its main data source (see Table 6.1), but several other variables are offered. These variables were found to be of use in earlier applications (see Table 6.2). These variables are divided into three basic data types: integer, double, and string. In order to deal with variables, a union type was created called comma_type_t. The function comma_id_gettype returns the type of the variable specified in the comma_id_t as one of LONG, DOUBLE, or STRING.


System:    sysDescr, sysObjectID, sysUpTime, sysContact, sysName, sysLocation, sysServices

IP:        ipInReceives, ipInHdrErrors, ipInAddrErrors, ipForwDatagrams, ipInUnknownProtos, ipInDiscards, ipInDelivers, ipOutRequests, ipOutDiscards, ipOutNoRoutes, ipRoutingDiscard

UDP:       udpInDatagrams, udpNoPorts, udpInErrors

TCP:       tcpRtoAlgorithm, tcpRtoMin, tcpRtoMax, tcpMaxConn, tcpActiveOpens, tcpPassiveOpens, tcpAttemptFails, tcpEstabResets, tcpCurrEstab, tcpInSegs, tcpOutSegs, tcpRetransSegs

Interface: ifNumbers, ifIndex, ifDescr, ifType, ifMtu, ifSpeed, ifInOctets, ifInUcastPkts, ifInNUcastPkts, ifInDiscards, ifInErrors, ifInUnknownProtos, ifOutOctets, ifOutUcastPkts, ifOutNUcastPkts, ifOutDiscards, ifOutErrors, ifOutQLen

Table 6.1: SNMP Variables Supported by the EEM

variable     description

netLatency   measure of the network latency from ping RTTs to the default router
avgInIPPkts  average of incoming IP packets, uni- or broadcast (from SNMP history)
cpuLoadAvg   cpu load average, as recorded by the local kernel
ethErrsAvg   number of errors in ethernet frames received by host
ethInAvg     number of incoming ethernet frames received by host
ethOutAvg    number of outgoing ethernet frames sent by host
deviceList   string that lists the devices configured on host
bytes_rx     bytes received by the network device driver
bytes_tx     bytes transmitted by the network device driver

Table 6.2: Additional EEM Variables

comma_init:        initialize comma structures & connect with the local server
comma_term:        free all local structures & disconnect from all servers currently in use
comma_setcallback: sets default callback function for interrupt-style callback notification

Table 6.3: EEM Initialization and Termination Functions

6.3.2 EEM-Interface Functions

A client interface was developed for this thesis to be used by applications, SP filters, and the Kati shell. This interface was designed to give access to EEM server variables in a straightforward manner, with minimal overhead. All interface functions begin with "comma_", followed by the function they support. This scheme was used to identify variables related to Comma.

In order to receive environment metrics from the EEM, the application must first initialize its interface, and then create some variable specifications to register. This is done by filling in two complementary data structures. The comma_id_t structure identifies the variable type and EEM server from which to receive the value. The comma_attr_t specifies when the notification is to take place. It gives the notification region and the evaluation criteria to determine if the variable is currently within the bounds of interest. Once these two structures have been filled in, they are registered via the comma_register() function. Updates will then arrive at the client, either through callbacks to the specified callback function or silently in the protected data area (PDA). Variables stored in the PDA can be accessed through comma_query functions. The client can use query functions to retrieve values using the comma_id_t values used for the registration of that variable. These EEM functions are briefly summarized in Tables 6.3 to 6.7.

Applications must first initialize internal data and other accounting structures hidden from the application. The comma_init function must be called before any other EEM-related functions. All server connections are closed and data structures freed by a call to comma_term. Currently, each client has the option of using the periodic-update method with or without callback notification. If the comma_setcallback function is called, all variables registered will be supplied to both the

CHAPTER 6. NETWORK MONITOR command

comma id init: comma id setnum: comma id setbyname: comma id setindex: comma id setall: comma id setserver: comma id isindexreqd: comma id gettype: comma id getname:

45

description

initializes id data structure sets id number of passed id sets id number of passed id given var name sets id index of passed id sets id number and index of passed id sets id server to given server checks if given id requires an index value returns the data type of the given id returns the char* name of the given id

Table 6.4: EEM ID Functions given callback function and the PDA. If no callback function is speci ed, only the periodic-update method will be used. These functions are summarized in Table 6.3. In order to register a variable with a possibly remote EEM server, two data structures must be lled in using the given functions. The rst structure, the comma id t type, speci es the variable type and EEM server location. This variable is used in the future for retrieving data stored in the PDA. Comma id init clears the variable id, while comma id setnum, comma id setbyname, comma id setindex, and comma id setall can be used to specify the variable type. The comma id setserver command can be used to retrieve a variable from a remote EEM server. The comma id isindexreqd function returns true if an additional index value is required for a variable, while the comma id gettype and comma id getname functions return the type and name of the variable respectively. These functions are summarized in Table 6.4. In order to specify the noti cation parameters of a variable, an additional disposable data structure must be lled in. The comma attr init function clears the comma attr t structure. The command

comma attr init: comma attr setlbound: comma attr setubound: comma attr setoperator:

description

(re)initializes attribute data structure sets lower bound for attr sets upper bound for attr speci es how bounds are interpreted

Table 6.5: EEM Attribute Functions

CHAPTER 6. NETWORK MONITOR command

comma var register:

46

description

given the id and attributes, registers with the desired server for the particular variable. comma var deregister: given an id, de-registers that variable from the appropriate server. comma var deregisterall: all current registrations with all servers are de-registered (as above) Table 6.6: EEM Register Functions and comma attr setubound specify the bounds of the region of interest, while the comma attr setoperator speci es how these bounds are to be interpreted. These functions are summarized in Table 6.5. Available unary operators are: COMMA GT, COMMA GTE, COMMA LT, COMMA LTE, COMMA EQ, COMMA NEQ, and available binary operators are: COMMA IN, COMMA OUT where GT = greater than, LT = less than, E/EQ = equal, and IN/OUT specify inside and outside the bounds given. For unary operators, only the lower bound is used. Binary operators require both the lower and upper bounds be speci ed. Note that type checking is done for string values so that only COMMA EQ, and COMMA NEQ are valid operators. Once these two variable-description data structures have been speci ed, the variable can then be registered. The comma var register function connects to a new server (if required) and makes the registration. Variables will then arrive at the client at a currently hard-coded interval of roughly ten-seconds. Variables may be removed individually by using comma var deregister comma attr setlbound

which will de-register the variable with the given id; comma_var_deregisterall de-registers all variables currently in use. These functions are summarized in Table 6.6.

If the application wishes to gain information from the data area, the comma_query functions give a number of options for accessing the client's read-only store. comma_query_getvalue simply returns the most recent value of the variable with the given id. The comma_query_isinrange function returns true if the variable is within the range of interest, and comma_query_haschanged returns true if the variable has changed since it was last read. If the application wishes to query the value of a single variable only once, comma_query_getvalue_once returns the current value as soon as the EEM server returns a reply. Note that this is a synchronous call, which allows polling of an EEM server. These functions are summarized in Table 6.7.

command                     description
comma_query_getvalue:       given an id, returns the most recent value from the relevant server
comma_query_isinrange:      given an id, reports whether the most recent value from the relevant server was in the requested range
comma_query_haschanged:     given an id, reports whether the most recent value from the relevant server has changed since the value was last retrieved
comma_query_getvalue_once:  given an id and attribute, retrieves a value from the server

Table 6.7: EEM Query Functions

6.3.3 Interface Example

In order to illustrate the operation of the EEM and the use of the client interface, a sample program is given in Figure 6.2. The program begins by installing a signal handler for terminating the Comma client (line 16). It then initializes its interface (lines 18-22) and fills in a variable attribute structure so that the interval of interest is the interior of [0,20] (lines 28-40). The program then fills in an id structure indicating that the variable of interest is SYS_UPTIME (lines 46-52). Since no comma_id_setserver call was made, the variable retrieved will be for the local host. The two structures are then registered (lines 58-65). Following this, the PDA is polled at ten-second intervals for two minutes to see if the variable has changed. When the value changes, the new value is printed to the screen (lines 71-81). This simple program could be used to check if or when a computer crashes during some distributed operation.

This chapter has explored the issues, design, and interface of the Comma EEM. The following chapter completes the explanation of the support architecture used in this thesis with a discussion of the Kati shell. Kati provides an interface to the operation of both the SP and EEM, allowing external control of the filters running on the SP, where previously applications were responsible for controlling their own filters.

 1  /*
 2   * Sample EEM Client code
 3   */
 4
 5  comma_id_t id;
 6  comma_attr_t attr;
 7  int lbound, ubound;
 8  comma_type_t new_value;
 9  int rc;
10
11  /*
12   * Do the initialization...
13   */
14
15
16  signal(SIGINT, (void*)comma_term);
17
18  rc = comma_init();
19  if( rc != COMMA_OK ) {
20      comma_perror( "comma_init" );
21      exit( 1 );
22  }
23
24  /*
25   * Fill in the attributes
26   */
27
28  rc = comma_attr_init( &attr );
29  if( rc != COMMA_OK ) {
30      comma_perror("attr_init");
31      exit( 1 );
32  }
33
34  lbound = 0;
35  comma_attr_setlbound( &attr, &lbound, sizeof( lbound ) );
36
37  ubound = 20;
38  comma_attr_setubound( &attr, &ubound, sizeof( ubound ) );
39
40  comma_attr_setoperator( &attr, COMMA_IN );
41
42  /*
43   * ...and the ID
44   */
45
46  rc = comma_id_init( &id );
47
48  rc = comma_id_setall( &id, COMMA_SYSUPTIME, 0 );
49  if( rc != COMMA_OK ) {
50      comma_perror( "setserver" );
51  }
52
53
54  /*
55   * Register the variable
56   */
57
58  rc = comma_var_register( &id, &attr );
59  if( rc != COMMA_OK ) {
60      comma_perror( "var_register" );
61      exit( 1 );
62  } else {
63      printf("main: register OK\n");
64  }
65
66
67  /*
68   * Continually read from static store
69   */
70
71  for(int i=0;i