Self: A Data-Flow Oriented Component Framework for Pervasive Dependability

Christof Fetzer, Karin Högstedt
AT&T Labs - Research
180 Park Avenue, Florham Park, NJ 07932
{christof, karin}@research.att.com

This paper appears in the Proceedings of the Eighth IEEE International Workshop on Object-oriented Real-time Dependable Systems (WORDS 2003), January 15-17, 2003, Guadalajara, Mexico.

Abstract

Both the scale and the reach of computer systems and embedded devices have been expanding constantly over the last few decades. As computer systems become pervasive, their criticality increases because they become an even more integral part of the infrastructure upon which our society depends. Hence, pervasive systems are likely to become "society critical" and their dependable operation must be ensured. Since the number of computing nodes in a pervasive system will be much greater than in traditional dependable systems, the operating cost per node needs to become much lower. Therefore, more cost-effective dependability mechanisms need to be deployed to achieve the desired degree of dependability at a given maximum operating cost. In this paper, we explain the notion of pervasive dependability and outline some of the challenges we face in achieving it. We then describe our component-based, data-flow oriented middleware that we use as a test bed to investigate how to address these challenges.

1 Introduction

A pervasive system consists of a large set of networked devices embedded, seemingly invisibly, in the environment. Pervasive systems research started in the late 1980s at Xerox PARC [23], and a variety of application domains have been proposed for pervasive systems, e.g., education [1], public spaces [9], health care [21], and home control systems [13, 6, 18]. Currently, however, pervasive systems are deployed only to a limited extent. This is because of the complexity of a pervasive system – each system might contain hundreds or even thousands of computing nodes working together. So far, such systems have been economically infeasible to build. This will change, however. During the last four decades, we have witnessed an exponential increase in processing power, disk capacity, and network bandwidth. One expects this exponential increase to continue for at least another decade. The underlying exponential increase in circuit density (described by "Moore's Law" [17]) will make pervasive systems economically feasible to build.

Pervasive systems will become more and more critical as they become an integral part of the infrastructure upon which our society depends. While the unavailability of a few such systems might be a mere inconvenience, the outage of a large number of systems might have broad economic consequences. We call a system with this kind of criticality a society critical system. Because pervasive systems will be society critical, it is crucial that they are designed to be dependable. While most of the past and current work in pervasive systems has focused on the human/computer interface, our work instead focuses on dependability issues, i.e., the reliability, availability, security, and manageability of pervasive systems. Although it is recognized that the dependability of pervasive systems is an important issue [5], not much research has been done in this area. A notable exception is [22], which describes a dependable infrastructure for home automation. That work focuses mainly on a soft-state approach for home automation, and many questions remain open.

Pervasive systems have to become more dependable before they can be deployed on a larger scale. In particular, pervasive systems will not be widely deployed if they require users to invest a substantial amount of time or money to keep them operating. Our goal is to find dependability mechanisms and policies that (1) maximize the dependability of pervasive systems, while (2) minimizing the operating cost (including deployment and maintenance cost). Informally, we use the term pervasive dependability to refer to these two requirements.



Without achieving pervasive dependability, it is unlikely that pervasive systems will become widely used in our homes and throughout society. However, if we can achieve pervasive dependability, they might become society critical, i.e., pervasive systems will be critical as a group. It is not important if one such system, e.g., one home network, goes down; but if many fail, our society will be adversely impacted. It is therefore just as important to make society critical systems dependable as it is to make safety and mission critical systems dependable.

There is, however, a difference in the mechanisms needed to achieve this dependability. Due to the traditional focus of dependability research on mission and safety critical systems, guaranteeing correctness has been of utmost importance, while the minimization of operating costs has been a secondary issue. In comparison, most pervasive systems will have relaxed correctness requirements, whereas, due to the sheer number of devices in a pervasive system, the cost per node has to be kept to a minimum. It is therefore likely that new or different dependability mechanisms are more appropriate for achieving pervasive dependability.

The most fundamental challenge in making pervasive systems dependable is complexity. The very large number of devices and software components, and the requirement to facilitate cooperation between all devices, results in very complex systems. Furthermore, there will be a large number of pervasive systems, all of which are different from each other. For example, consider the context of pervasive home systems. If sensors and actuators become sufficiently inexpensive, future homes might deploy pervasive systems consisting of hundreds or even thousands of devices. One has to expect that each home will use a different set of devices and software components. Managing such systems will be a challenge: diagnosing why something does not work in a complex system is non-trivial, but will frequently be needed because the failure frequency increases with the number of devices and software components.

The complexity of pervasive systems has many aspects. Marc Weiser articulated some of these aspects as follows [24]:

"If the computational system is invisible as well as extensive, it becomes hard to know what is controlling what, what is connected to what, where information is flowing, how it is being used, what is broken (...), and what are the consequences of any given action (...)."

His statement is valid both at the human interface level and at the software level. In our approach to achieving pervasive dependability, we attempt to address the issue of complexity in a systematic way.

In the rest of this paper we outline our approach to achieving pervasive dependability (Section 2) and present the architecture of our component-based middleware system, Self (Section 3). We present related work in Section 4 and conclude in Section 5.

2 Approach

Our approach to achieving pervasive dependability is to harness the power of both Moore's Law and the scientific method. The idea is to keep as much run-time data as possible (using the exponentially increasing disk sizes), to perform automated data mining (using the exponentially increasing network speeds to collect distributed data and the exponentially increasing processor speeds to process the data), and to perform automated fault-injection experiments (using the exponentially increasing processor speeds) to automatically increase the dependability of pervasive systems without the need for human intervention.

Automation is achieved by "mechanizing the scientific method". The idea is that, based on the properties of the software components, the system automatically selects appropriate dependability mechanisms. To derive properties of software components, the system tests a set of hypotheses (e.g., a function f crashes if its first argument is negative) using automated fault-injection experiments to reject invalid hypotheses. Based on the non-rejected hypotheses, the system selects a set of appropriate dependability mechanisms (e.g., it creates a wrapper for f that checks that the first argument of f is not negative).

In the scientific method, one rejects incorrect hypotheses by thorough experimentation and accepts the hypothesis that explains the experimental results. A previously accepted hypothesis might be rejected and replaced by a more precise hypothesis at some later point if more experimental data becomes available. In an automated system, a previously accepted hypothesis might be rejected due to logged field data. If the optimality (or correctness) of a dependability mechanism depends on the validity of a hypothesis, one cannot guarantee optimality (or correctness), since new data can lead to rejection of the hypothesis. Due to the expected low criticality of an individual pervasive system, we believe the potential benefits (e.g., higher availability and lower operating cost) outweigh the potential risks (e.g., incorrect failure masking).
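To make this loop more concrete, the following minimal sketch shows how a hypothesis might be tested by automated fault injection and, if it is not rejected, turned into a protective wrapper. This is not the actual Self implementation; the names (Hypothesis, call_target, guarded_call) and the simplified failure model are assumptions made purely for illustration.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch: a "hypothesis" states that calling a target function
// with an argument satisfying `trigger` makes it fail.  Automated fault
// injection tries to refute the hypothesis; if it is not refuted, a wrapper
// is generated that filters out the suspected failure-triggering inputs.
struct Hypothesis {
    std::function<bool(int)> trigger;   // e.g., "first argument is negative"
    std::string description;
};

// Simulated target; stands in for a library function under test.
bool call_target(int arg) {
    return arg >= 0;   // returns false to model a failure for negative input
}

// Fault-injection campaign: feed inputs that satisfy the trigger and check
// whether the target really fails.  If it ever succeeds, reject the hypothesis.
bool survives_fault_injection(const Hypothesis& h, const std::vector<int>& probes) {
    for (int p : probes) {
        if (h.trigger(p) && call_target(p)) {
            return false;              // counter-example found: reject
        }
    }
    return true;                       // not rejected (may still be refuted later)
}

// Wrapper selected on the basis of a non-rejected hypothesis.
bool guarded_call(const Hypothesis& h, int arg) {
    if (h.trigger(arg)) {
        std::cerr << "blocked call: " << h.description << "\n";
        return false;                  // mask the suspected failure
    }
    return call_target(arg);
}

int main() {
    Hypothesis h{[](int a) { return a < 0; },
                 "target crashes if its first argument is negative"};
    if (survives_fault_injection(h, {-3, -1, 0, 2, 7})) {
        guarded_call(h, -5);           // masked instead of crashing
        guarded_call(h, 5);            // passed through to the target
    }
    return 0;
}
```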

We are investigating this idea in the context of pervasive home systems. Initial results demonstrate that one can increase the dependability of C libraries automatically using automated fault-injection experiments [8]. We have been designing and implementing a new data-flow oriented middleware framework, called Self. The goals of this platform are to (1) permit us to perform automated fault injection and data collection of application software components and the middleware platform itself, and (2) help us to deal with the complexity of pervasive systems.

To test and evolve our ideas about how to achieve pervasive dependability, we decided to design and implement a data-flow oriented system. An application is designed and implemented as a set of components connected by explicit links. Communication of data (and control) is restricted to these links. This enables the system to keep track of all data and control flow. Logging the data flowing via the links permits the system to track how components are being used and to determine what is broken. Using fault injection, the system can also determine the consequences of faults, and this might permit it to select appropriate fault detection and masking mechanisms. In the next section we give a more detailed overview of the architecture of the Self system.

3 Self Architecture

Self is a data-flow oriented, component-based middleware framework. When using Self, an application is designed by connecting components together, forming a component graph. Components communicate by passing data objects to connected components. Since the connections between components represent data-flow, we call Self a data-flow oriented framework. However, as we explain below, this framework also supports several kinds of control-flow patterns. The components are connected to each other through their pins. A component can have any number of pins, created either statically or dynamically, but a component can only communicate with other components via its pins. In other words, the pins constitute a component's only communication interface. See Figure 1 for a sample component graph.


Figure 1. A sample component graph (components A, B, C, and D and a splitter). The components are shown as rectangles and the pins as circles with an enclosed triangle indicating the direction of the data-flow.

A data-flow oriented approach has the advantage of a clear separation between components. Because all data flow along known connections, one can use this separation of components to reconfigure a system in case of unmasked failures. For example, in some applications graceful degradation is possible by removing failed or disconnected components from the component graph. In Figure 1 it might, for instance, be permissible to remove a failed component by connecting an output pin of its upstream component directly to an input pin of its downstream component.

3.1 Component Graphs

A component graph consists of a set of components connected together through their pins. Pins are unidirectional: a pin is either an input or an output pin. Hence, component graphs are directed. Furthermore, pins can only be connected pairwise. If more than two pins need to be connected to each other, "merger" or "splitter" components must be inserted as needed. For example, if the output of one component is used by two other components, a "splitter" with one input pin and two output pins would be inserted (see the splitter component in Figure 1). Pins are untyped in the sense that any type of data object can be sent via a pin. When connecting two pins, one only has to make sure that the data direction matches, i.e., an output pin must be connected to an input pin (see Sections 3.3 and 3.4 for a detailed explanation). Of course, the potential disadvantage of not having a type system is the introduction of avoidable run-time errors. However, we believe this is outweighed by the very flexible reuse of components, unconstrained by the restrictions a type system would impose.

Components can be designed and implemented hierarchically, using other components. Once an application designer has designed a component graph, this component graph can be used as a component in another component graph. We call a component that is implemented by a component graph a super-component. For example, Figure 2 shows a super-component S that is implemented by a component graph consisting of four components.

Figure 2. Super-component S is implemented by a component graph consisting of four components; three pins of these inner components are exported as the externally visible pins of S.

Users can also write their own components completely from scratch, so-called elementary components. Self provides a library that contains building blocks such as standard pins (see Figure 5) and component templates that simplify the implementation of elementary components, i.e., non-super-components. A component graph can be built by connecting standard components provided by the Self library (see Section 3.5), previously created super-components, and/or elementary components implemented by the users of Self. Self supports blocking and non-blocking communication as well as push- and pull-oriented communication. We discuss the programming language level primitives that facilitate inter-component communication next.
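To illustrate how such a component graph might be assembled, the following minimal sketch models pairwise pin connections and a splitter. The API names (InputPin, OutputPin, connect, Splitter) are assumptions for illustration and are not taken from the Self library.

```cpp
#include <functional>
#include <iostream>
#include <string>

// Hypothetical sketch of assembling a component graph.  The real Self API is
// not shown in the paper; this only illustrates pairwise pin connections and
// the role of a splitter component.
using DataObject = std::string;

struct InputPin {
    std::function<void(const DataObject&)> on_insert;  // what the owning component does
};

struct OutputPin {
    InputPin* peer = nullptr;                           // pins are connected pairwise
    void insert(const DataObject& d) { if (peer) peer->on_insert(d); }
};

// Connecting always pairs one output pin with one input pin.
void connect(OutputPin& out, InputPin& in) { out.peer = &in; }

// A "splitter" component: one input pin, two output pins.
struct Splitter {
    InputPin in;
    OutputPin out1, out2;
    Splitter() {
        in.on_insert = [this](const DataObject& d) { out1.insert(d); out2.insert(d); };
    }
};

int main() {
    // Sink components are modelled here by their input pins only.
    InputPin sink_a{[](const DataObject& d) { std::cout << "A got " << d << "\n"; }};
    InputPin sink_b{[](const DataObject& d) { std::cout << "B got " << d << "\n"; }};

    OutputPin source;          // output pin of a source component
    Splitter split;            // inserted because two components consume one output
    connect(source, split.in);
    connect(split.out1, sink_a);
    connect(split.out2, sink_b);

    source.insert("sensor reading");   // data flows along the explicit links
    return 0;
}
```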

3.2 Data-Flow


A component can receive an object by calling one of the following two methods on an input pin: get_wait() or get_poll(). A component can send an object by calling the method insert_wait() or insert_poll() on an output pin. We call these four methods pin methods. The methods get_wait() and insert_wait() are blocking calls and do not return until an object has been successfully transferred. By "successfully transferred" we mean that the receiving component has taken ownership of the object. In particular, get_wait() only returns when it has received a data object from the connected component, and insert_wait() returns when the data object has been successfully transferred to the receiving component. If the connection or the connected component fails, the pin throws an exception to notify the component that an error has occurred. The methods get_poll() and insert_poll() are non-blocking versions of get_wait() and insert_wait(), respectively. They return an error code if the object was not successfully transferred.
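A minimal sketch of how an input pin could realize these calls is shown below. Only the method names get_wait() and get_poll() come from the text; the class layout, the deliver() hook used by the connected output pin, and the use of std::optional in place of an explicit error code are assumptions.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>

// Hypothetical sketch of an input pin offering the blocking and non-blocking
// receive calls described above.
using DataObject = int;

class InputPin {
public:
    // Blocking: returns only once an object has been successfully transferred.
    DataObject get_wait() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !buf_.empty(); });
        DataObject d = buf_.front();
        buf_.pop();
        return d;
    }

    // Non-blocking: an empty optional plays the role of the error code.
    std::optional<DataObject> get_poll() {
        std::lock_guard<std::mutex> lk(m_);
        if (buf_.empty()) return std::nullopt;
        DataObject d = buf_.front();
        buf_.pop();
        return d;
    }

    // Called by the connected output pin when it transfers an object.
    void deliver(const DataObject& d) {
        { std::lock_guard<std::mutex> lk(m_); buf_.push(d); }
        cv_.notify_one();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<DataObject> buf_;
};

int main() {
    InputPin pin;
    pin.deliver(7);
    if (auto d = pin.get_poll()) std::cout << "polled " << *d << "\n";
    pin.deliver(8);
    std::cout << "waited for " << pin.get_wait() << "\n";
    return 0;
}
```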

Figure 3. Two components connected to each other using (a) a slave output pin and a master input pin, and (b) a master output pin and a slave input pin. In (a) component B initiates the control flow by calling get_wait() (or get_poll()) on its input pin, and in (b) it is initiated by component A calling insert_wait() (or insert_poll()) on its output pin.

3.3 Control-Flow

For more convenient programming of components, Self supports both a push- and a pull-oriented style of communication. Pins are classified according to the direction of the data-flow as either input or output pins, and they are also classified according to the direction of the control-flow as either master pins or slave pins. A master pin can only be connected to a slave pin, and vice versa. The pin methods are called on a master pin by the component the pin belongs to, whereas they are called on a slave pin by the master pin it is connected to. In other words, master pins are the callers and slave pins are the callees.

For example, consider the two components depicted in Figure 3(a). Component B calls get_wait() or get_poll() on its master input pin, and this call is forwarded to the slave pin on component A. A's slave pin then performs the actual method call, gets the data object from component A, and returns it to component B via B's master input pin. On the other hand, if component B's input pin is a slave pin and component A's output pin is a master pin (see Figure 3(b)), component A initiates the control flow by calling insert_wait() or insert_poll() on its output pin. The master output pin then forwards this call to the slave input pin, which inserts the data object into component B.

As mentioned above, the reason we distinguish between master and slave pins is to allow both a push- (master output pin connected to slave input pin) and a pull-oriented (slave output pin connected to master input pin) communication mechanism between the components. Note that in push-oriented communication the data-flow and control-flow have the same direction, while in pull-oriented communication they have opposite directions. Distinguishing between master and slave pins also permits the system to enforce that incompatible pins are not connected directly with each other: it is not possible to connect two master pins or two slave pins directly. However, if the application designer wishes to connect two such components, Self automatically inserts an adaptor with the appropriate behavior between the two components. Adaptor components are provided by the Self library and allow users who are not interested in fine-tuning the performance to design applications without knowing the difference between master and slave pins. We describe adaptors in more detail in Section 3.5.4.
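The following sketch illustrates the pull pattern of Figure 3(a), in which the master input pin is the caller and the connected slave output pin is the callee. The class names and the use of callbacks are assumptions made for illustration; they are not the Self API.

```cpp
#include <functional>
#include <iostream>
#include <stdexcept>

// Hypothetical sketch of pull-oriented communication: the master input pin
// forwards get_wait() to the connected slave output pin, which obtains the
// object from its own component.  Control flows B -> A, data flows A -> B.
using DataObject = int;

struct SlaveOutputPin {
    std::function<DataObject()> produce;          // supplied by the owning component A
    DataObject get_wait() { return produce(); }   // callee side
};

struct MasterInputPin {
    SlaveOutputPin* peer = nullptr;
    DataObject get_wait() {                       // caller side, used by component B
        if (!peer) throw std::runtime_error("pin not connected");
        return peer->get_wait();                  // forward the call to the slave pin
    }
};

int main() {
    SlaveOutputPin a_out{[] { return 42; }};      // component A hands out a data object
    MasterInputPin b_in;
    b_in.peer = &a_out;
    std::cout << "B pulled " << b_in.get_wait() << "\n";
    return 0;
}
```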

3.4 Event-Flow

Finally, we also distinguish between pins that can generate and process events and those that cannot. We call the former kind of pins event pins. Event pins can be used to let the connected component know when there is either new data or space available at the pin's component.

Figure 4. Two components connected to each other using (a) a slave event output pin and a master event input pin, and (b) a master event output pin and a slave event input pin. In (a) component A notifies component B that it has data available by letting its output pin send an event to component B's input pin. In (b) component B notifies component A that it has space available by letting its input pin send an event to component A's output pin. The notified components can then choose to either ignore or take advantage of this information.

Consider, for example, the situation in Figure 4(a). Here, since component A's output pin is a slave event pin, component A can notify component B of the fact that it has a new data object available. (The small arrows inside the triangles of the pins denote the direction of the event-flow.) Since component B's pin is the master pin, it is component B's responsibility to initiate the control flow. Component B can therefore choose either to ignore the event or to act upon it by calling the get_wait() or get_poll() method on its input pin. Similarly, in Figure 4(b) component B can notify component A of the fact that it is ready to process a new data object. Since component A is the one that initiates the control flow, it may or may not choose to ignore this event.

The event notification is always initiated by the component to which the slave pin belongs, and is then forwarded to the connected component via the two pins. More specifically, in Figure 4(a) component A notifies its output pin, which notifies component B's input pin, which notifies component B. In Figure 4(b) the event-flow goes in the opposite direction. Notice that the direction of the event-flow is always the opposite of the direction of the control flow.

A master event pin can only be connected to a slave event pin. If an application designer connects a master event pin to a non-event pin, Self automatically inserts an adaptor. However, slave event pins can safely be connected to master pins without event capabilities – these pins must by convention drop all incoming events.

The eight different kinds of pins provided by the Self library are summarized in Figure 5. These pins are used in the implementations of the standard components. However, these standard pins can also be used by component designers when creating new elementary components.
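The event-triggered pull of Figure 4(a) might be sketched as follows. All names and the callback-based wiring are assumptions; in particular, the real framework forwards the event through both pins, which is collapsed into a single notify() call here.

```cpp
#include <functional>
#include <iostream>

// Hypothetical sketch of event-triggered pull: component A's slave event output
// pin raises a "data available" event; component B's master event input pin
// forwards it to B, which may react by pulling or simply ignore it.
struct MasterEventInputPin {
    std::function<void()> on_data_available;    // installed by the owning component B
    void notify() { if (on_data_available) on_data_available(); }
};

struct SlaveEventOutputPin {
    MasterEventInputPin* peer = nullptr;
    int latest = 0;                             // the data object held by component A
    void announce(int value) {                  // A signals that new data exists
        latest = value;
        if (peer) peer->notify();               // event flows against the control flow
    }
    int get_wait() { return latest; }           // later pulled via B's control flow
};

int main() {
    SlaveEventOutputPin a_out;
    MasterEventInputPin b_in;
    a_out.peer = &b_in;

    // B chooses to act on the event by pulling; it could equally ignore it.
    b_in.on_data_available = [&a_out] {
        std::cout << "B pulls " << a_out.get_wait() << "\n";
    };
    a_out.announce(7);
    return 0;
}
```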

Figure 5. The eight standard pins that the Self library provides: slave input, slave event input, master output, master event output, slave output, slave event output, master input, and master event input pins. The first four types of pins provide a push communication mechanism, since the control-flow and the data-flow direction are the same. The latter four provide a pull mechanism, since the control-flow and data-flow go in opposite directions.

3.5 Standard Components

As a basic building block for user-defined components, we provide a base class which component designers can extend to create their own components. However, one goal of the Self component framework is reusability, and we therefore also provide a set of general components that application designers can use. Among these standard components are queues, event-handlers, filters, and adaptors. We now describe these in more detail.

3.5.1 Standard Queue

The purpose of the standard queue component is to act as a buffer: when it receives a data object on its input pin it stores it, and when it gets a request for a data object, it removes one object from its buffer in FIFO order. The queue has one slave event input pin and one slave event output pin. The fact that the pins are slave pins means that the control is initiated by the connected components (see Figure 5). Whenever the component connected to its input pin has a new data object available, that component calls insert_wait() or insert_poll() on the queue's input pin. And when the component connected to the queue's output pin needs more data, it calls either get_wait() or get_poll() on the queue's output pin.

Furthermore, since the queue's pins are event pins, they also generate events. When the queue is non-full, its input pin sends a "space available" event to the component it is connected to, and when the queue is non-empty, its output pin sends a "data available" event to its connected component. These events might be ignored by the connected components, or they might cause the connected components to issue the above-mentioned method calls.
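A simplified sketch of the queue's behavior is given below. It is single-threaded, shows only the polling variants of the pin methods, and models the two events as callbacks; the class name and these simplifications are assumptions, not the Self implementation.

```cpp
#include <cstddef>
#include <deque>
#include <functional>
#include <iostream>

// Hypothetical sketch of a bounded FIFO queue component with slave pins:
// the connected components drive all calls, and the queue raises
// "data available" / "space available" events as its buffer state changes.
class StandardQueue {
public:
    explicit StandardQueue(std::size_t capacity) : cap_(capacity) {}

    std::function<void()> data_available;    // event from the slave output pin
    std::function<void()> space_available;   // event from the slave input pin

    // Called by the upstream component on the queue's slave input pin.
    bool insert_poll(int d) {
        if (buf_.size() >= cap_) return false;   // no space: report an error code
        buf_.push_back(d);
        if (data_available) data_available();
        return true;
    }

    // Called by the downstream component on the queue's slave output pin.
    bool get_poll(int& out) {
        if (buf_.empty()) return false;
        out = buf_.front();                      // FIFO order
        buf_.pop_front();
        if (space_available) space_available();
        return true;
    }

private:
    std::size_t cap_;
    std::deque<int> buf_;
};

int main() {
    StandardQueue q(2);
    q.data_available = [] { std::cout << "data available event\n"; };
    q.insert_poll(1);
    q.insert_poll(2);
    int v;
    while (q.get_poll(v)) std::cout << "consumer got " << v << "\n";
    return 0;
}
```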

3.5.2 Standard Event-Handler

The standard event-handler allows the component designer to create an event-handler by specifying only the number of pins and the body it should have. All other details are taken care of by the standard event-handler. An event-handler created from the standard event-handler contains one thread and any number of pins. In general these pins would all be master pins, since the control will most likely be initiated by the event-handler thread. There is, however, nothing preventing the component designer from adding slave pins to the design. At least one of the pins will in all likelihood be a master event pin, since the purpose of an event-handler is to react to events. Apart from deciding what kinds of pins the event-handler should have, the component designer also needs to implement the body of the event-handler, i.e., the method which reacts to the events it gets. The body of the event-handler receives an event and the sender of the event as its input.

Figure 6. A sample event-handler derived from the standard event-handler component, with two master event input and three master event output pins.

Let us consider the sample event-handler depicted in Figure 6. It has two master event input pins and three master event output pins. When it is instantiated, its thread is automatically created and waits for an event to occur. When this happens, the body is started and the event and its sender are given to the body as input, e.g., "data available on inpin1". The body then processes the event, for example by calling get_poll() on inpin1 followed by an insert_wait() on one of the output pins from which it previously has received a "space available" event.
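The following sketch illustrates what such a body might look like. The Event structure, the pin callbacks, and the direct call to body() are assumptions; in the real framework the event-handler's own thread would wait for events and invoke the body.

```cpp
#include <functional>
#include <iostream>
#include <string>

// Hypothetical sketch of deriving a component from the standard event-handler:
// the designer supplies only the pins and a body that receives an event and
// its sender.
struct Event {
    std::string kind;     // e.g., "data available" or "space available"
    std::string sender;   // e.g., "inpin1"
};

struct MyEventHandler {
    std::function<int()> poll_inpin1;        // stands in for get_poll() on inpin1
    std::function<void(int)> insert_outpin;  // stands in for insert_wait() on an output pin

    // The body: called by the event-handler thread with the event and its sender.
    void body(const Event& e) {
        if (e.kind == "data available" && e.sender == "inpin1") {
            insert_outpin(poll_inpin1());    // forward the object downstream
        }
    }
};

int main() {
    MyEventHandler h;
    h.poll_inpin1 = [] { return 3; };
    h.insert_outpin = [](int d) { std::cout << "forwarded " << d << "\n"; };
    h.body({"data available", "inpin1"});    // normally delivered by the thread
    return 0;
}
```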


3.5.3 Standard Filter

The purpose of the standard filter component is to simplify the creation of simple object filters: a component designer only needs to specify a filtering function to create a new filter component. A standard filter has two pins: an input pin and an output pin. The input pin is a slave pin, whereas the filter's output pin is a master pin. Consider the sample filter depicted in Figure 7. Whenever the component connected to its input pin performs an insert, the filter calls the user-defined filtering function to either transform or drop the incoming data object. If the object is to be transformed, the new object is sent to the next component via the output pin.

Figure 7. A sample filter derived from the standard filter component.
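A possible realization of a filtering function and its invocation is sketched below. The class name, the std::optional-based "transform or drop" convention, and the forward callback standing in for the master output pin are assumptions for illustration.

```cpp
#include <cctype>
#include <functional>
#include <iostream>
#include <optional>
#include <string>

// Hypothetical sketch of a standard filter: the component designer supplies
// only the filtering function; returning an empty optional drops the object,
// otherwise the (possibly transformed) object is passed to the output pin.
using DataObject = std::string;
using FilterFn = std::function<std::optional<DataObject>(const DataObject&)>;

class StandardFilter {
public:
    explicit StandardFilter(FilterFn fn) : fn_(std::move(fn)) {}

    // Invoked when the upstream component inserts into the filter's slave
    // input pin; forward() stands in for the master output pin.
    void insert_wait(const DataObject& d,
                     const std::function<void(const DataObject&)>& forward) {
        if (auto out = fn_(d)) forward(*out);   // transformed object is sent on
        // otherwise the object is silently dropped
    }

private:
    FilterFn fn_;
};

int main() {
    StandardFilter upper([](const DataObject& d) -> std::optional<DataObject> {
        if (d.empty()) return std::nullopt;     // drop empty objects
        DataObject u = d;
        for (char& c : u) c = std::toupper(static_cast<unsigned char>(c));
        return u;
    });
    auto print = [](const DataObject& d) { std::cout << d << "\n"; };
    upper.insert_wait("temperature", print);    // forwarded as "TEMPERATURE"
    upper.insert_wait("", print);               // dropped
    return 0;
}
```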

3.5.4 Standard Adaptors

The adaptors serve two purposes. First, they allow users to design applications without having intimate knowledge of the details of the different kinds of pins. Second, they allow components to be designed in the most natural way, without taking into account what they will be connected to; after all, this information is not always available a priori. Whenever a user connects two pins of incompatible types, e.g., two slave pins, or a master event pin and a slave pin without event capabilities, an adaptor is automatically inserted between the pins. The inserted adaptor has the correct pins and also the correct functionality. For example, if a user wants to connect a slave output pin with a slave event input pin, the inserted adaptor has one master input pin and one master event output pin. The adaptor is derived from the standard event-handler, and its body waits for an event from its output pin. When it gets that event, it calls get_wait() on its input pin, followed by an insert_poll() on its output pin.

There is a total of twelve standard adaptors. They allow users to connect any pair of pins as long as the pair consists of one input pin and one output pin. The adaptors do not necessarily have to be inserted automatically by the system; they can also be inserted explicitly by the application designer.

The disadvantage of the adaptors is a potential performance hit due to added levels of indirection and potentially extra threads. But without adaptors, component designers would be forced to rewrite their components every time they were connected to a different kind of pin. The adaptors also permit high-level application designers who know about data direction – but not necessarily control-flow or event-flow – to use the framework.
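The adaptor just described (slave output pin connected to a slave event input pin) might behave as sketched below. The structure and the callbacks standing in for the adaptor's master pins are assumptions; only the sequence of calls (react to the event, get_wait() upstream, insert_poll() downstream) follows the text.

```cpp
#include <functional>
#include <iostream>

// Hypothetical sketch of an adaptor between a slave output pin (upstream) and
// a slave event input pin (downstream).  The adaptor owns a master input pin
// and a master event output pin; its body reacts to a "space available" event
// by pulling one object and pushing it downstream.
struct SlaveToSlaveAdaptor {
    std::function<int()> pull_upstream;        // master input pin -> slave output pin
    std::function<bool(int)> push_downstream;  // master event output pin -> slave event input pin

    // Body (derived from the standard event-handler): runs on a "space available" event.
    void on_space_available() {
        int d = pull_upstream();               // get_wait() on the adaptor's input pin
        if (!push_downstream(d)) {             // insert_poll() on the adaptor's output pin
            std::cerr << "downstream rejected the object\n";
        }
    }
};

int main() {
    SlaveToSlaveAdaptor adaptor;
    adaptor.pull_upstream = [] { return 5; };            // upstream slave output pin
    adaptor.push_downstream = [](int d) {                // downstream slave event input pin
        std::cout << "downstream received " << d << "\n";
        return true;
    };
    adaptor.on_space_available();   // triggered by the downstream event pin
    return 0;
}
```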

4 Related Work

Data-flow oriented systems have been investigated in different contexts in recent years [10, 14, 4, 25, 11]. For example, data-flow oriented systems facilitate adaptation [7, 10, 19], throughput optimization [25], threading optimization [14], and media streaming [4]. Our investigation focuses on the applicability of data-flow oriented systems for pervasive dependability. The major questions that we are addressing in this context are how one performs (1) self-configuration, (2) self-tuning, and (3) self-diagnosis.

There has been a large body of work addressing the issue of reconfiguration of component-based systems [2, 3, 10, 12, 15, 16, 20]. However, with the possible exceptions of [10, 16], which provide (non-trivial) automatic reconfiguration of pipelined applications, all of these approaches require an amount of user (or system administrator) involvement that is inappropriate for pervasive dependable systems. When addressing this target domain, we need to develop a more general system that requires less user involvement and that handles arbitrary component graphs and more failure types.

So far, only a few authors have addressed the dependability issues of pervasive systems. Wang et al. [22] have addressed the issue of dependability of home networks. Their approach uses a soft-state infrastructure, i.e., the state is periodically refreshed. The advantage of this approach is that erroneous states are corrected (or removed) within a bounded amount of time. The disadvantage is that the staleness and even the correctness of data is application dependent. Different applications might have different requirements for the period after which the state of a device must be updated or removed. For example, a fault-diagnosis application might need the last (possibly erroneous) state of a device long after the device's state has to be removed from the system. A refresh period for the state of a device that fits all applications might not always exist.

5 Conclusions

In this paper we described Self, a new data-flow oriented component framework. We have designed and implemented Self to be able to investigate several issues in achieving pervasive dependability. The Self framework is very flexible in permitting application designers to mix and match components that have been designed using different event- and control-flow styles (push vs. pull, event-oriented vs. non-event-oriented).

References

[1] G. D. Abowd. Classroom 2000: An experiment with the instrumentation of a living educational environment. IBM Systems Journal, 38(4):508–530, 1999.

[2] M. R. Barbacci, C. B. Weinstock, D. L. Doubleday, M. J. Gardner, and R. W. Lichota. Durra: A structure description language for developing distributed applications. IEE Software Engineering Journal, 8(2):83–94, 1993.

[3] T. Bloom and M. Day. Reconfiguration and module replacement in Argus: Theory and practice. Software Engineering Journal, 8(2):102–108, 1993.

[4] G. Bond, E. Cheung, A. Forrest, M. Jackson, H. Purdy, C. Ramming, and P. Zave. DFC as the basis for ECLIPSE, an IP communications software platform. In Proceedings of the IP Telecom Services Workshop, pages 19–26, September 2000.

[5] N. Davies and H. W. Gellersen. Beyond prototypes: Challenges in deploying ubiquitous systems. IEEE Pervasive Computing, 1(2):26–35, 2002.

[6] C. D. Kidd et al. The Aware Home: A living laboratory for ubiquitous computing research. In 2nd International Workshop on Cooperative Buildings, 1999.

[7] S.-W. Cheng et al. Using architectural style as a basis for self-repair. In IEEE/IFIP Conference on Software Architecture, 2002.

[8] C. Fetzer and Z. Xiao. An automated approach to increasing the robustness of C libraries. In International Conference on Dependable Systems and Networks, 2002.

[9] M. Fleck, M. Frid, T. Kindberg, E. O'Brien-Strain, R. Rajani, and M. Spasojevic. From informing to remembering: Ubiquitous systems in interactive museums. IEEE Pervasive Computing, 1(2):13–21, 2002.

[10] X. Fu, W. Shi, A. Akkerman, and V. Karamcheti. CANS: Composable, Adaptive Network Services infrastructure. In USENIX Symposium on Internet Technologies and Systems (USITS), March 2001.

[11] M. M. Gorlick and R. R. Razouk. Using weaves for software construction and analysis. In 13th International Conference on Software Engineering, pages 23–34, 1991.

[12] C. Hofmeister, E. White, and J. Purtilo. Surgeon: A packager for dynamically reconfigurable distributed applications. IEE Software Engineering Journal, 8(2):95–101, March 1993.

[13] S. S. Intille. Designing a home of the future. IEEE Pervasive Computing, 1(2):76–82, 2002.

[14] R. Koster, A. P. Black, J. Huang, J. Walpole, and C. Pu. Infopipes for composing distributed information flows. In International Workshop on Multimedia Middleware, 2001.

[15] J. Kramer and J. Magee. The evolving philosophers problem: Dynamic change management. IEEE Transactions on Software Engineering, 16(11):1293–1306, 1990.

[16] V. Martin and K. Schwan. ILI: An adaptive infrastructure for dynamic interactive distributed applications. In Proceedings of the Fourth International Conference on Configurable Distributed Systems, pages 224–231, 1998.

[17] G. E. Moore. Cramming more components onto integrated circuits. Electronics, 38(8):114–117, April 1965.

[18] M. Mozer. The Neural Network House: An environment that adapts to its inhabitants. In AAAI Spring Symposium on Intelligent Environments, pages 110–114, 1998.

[19] P. Oreizy, M. M. Gorlick, R. N. Taylor, D. Heimbigner, G. Johnson, N. Medvidovic, A. Quilici, D. S. Rosenblum, and A. L. Wolf. An architecture-based approach to self-adaptive software. IEEE Intelligent Systems, 14(3):54–62, May–June 1999.

[20] J. M. Purtilo. The Polylith software bus. ACM Transactions on Programming Languages and Systems (TOPLAS), 16(1):151–174, 1994.

[21] V. Stanford. Using pervasive computing to deliver elder care. IEEE Pervasive Computing, 1(1):10–13, 2002.

[22] Y. M. Wang, W. Russell, and A. Arora. A toolkit for building dependable and extensible home networking applications. In Proceedings of the 4th USENIX Windows Systems Symposium, August 2000.

[23] M. Weiser. Hot topics – ubiquitous computing. IEEE Computer, 26(10):71–72, October 1993.

[24] M. Weiser, R. Gold, and J. S. Brown. The origins of ubiquitous computing research at PARC in the late 1980s. IBM Systems Journal, 38(4):693–696, 1999.

[25] M. Welsh, D. Culler, and E. Brewer. SEDA: An architecture for well-conditioned, scalable Internet services. In Proceedings of the Eighteenth Symposium on Operating Systems Principles (SOSP-18), October 2001.
