
COVER FEATURE

Toward Internet Distributed Computing

The confluence of Web services, peer-to-peer systems, and grid computing provides the foundation for Internet distributed computing—allowing applications to scale from proximity ad hoc networks to planetary-scale distributed systems.

Milan Milenkovic, Scott H. Robinson, Rob C. Knauerhase, David Barkai, Sharad Garg, Vijay Tewari, Todd A. Anderson, and Mic Bowman
Intel



The World Wide Web’s current implementation is designed predominantly for information retrieval and display in a human-readable form. Its data formats and protocols are neither intended nor suitable for automated machine-to-machine interactions without humans in the loop. Emerging Internet uses—including peer-to-peer (P2P)1 and grid computing2—provide both a glimpse of and the impetus for evolving the Internet into a distributed computing platform unprecedented in scale.

Taking a longer view, we consider what would be needed to make the Internet an application-hosting platform. This would be a networked, distributed counterpart of the hosting environment that traditional operating systems provide to applications within a single node. Creating this platform requires adding a functional layer to the Internet that can allocate and manage the resources necessary for application execution.

Given such a hosting environment, software developers could create network applications without having to know at design time the type or number of nodes the application will execute on. With proper support, the system could allocate and bind software components to the resources they require at runtime, based on resource requirements, availability, connectivity, and system state at the actual time of execution. In contrast, early binding tends to result in static allocations that cannot adapt well to resource, load, and availability variations, so the software components tend to be less efficient and have difficulty recovering from failures.


The foundation of our proposed approach is to disaggregate and virtualize individual system resources as services that can be described, discovered, and dynamically configured at runtime to execute an application. We postulate that such a system can be built as a combination and extension of Web services, peer-to-peer computing, and grid computing standards and technologies. It thus follows the successful Internet model of adding minimal and relatively simple functional layers to meet new requirements while building atop already available technologies.

We do not, however, advocate an “Internet OS” approach that would provide some form of uniform or centralized global-resource management. We believe that several theoretical and practical reasons make such an approach undesirable, including its inability to scale and the need to provide and manage supporting software on every participating platform. Instead, we advocate using mechanisms that support spontaneous, dynamic, and voluntary collaboration among entities with their contributing resources.

DISTRIBUTED COMPUTING’S POTENTIAL

System designers have known for years that distributed computing offers many benefits, including the following:



• resource sharing and load balancing, providing efficient and responsive resource utilization;
• information sharing, permitting remote data access;
• incremental growth, delivering added capacity when and where needed;
• reliability, availability, and fault tolerance, achieved through redundancy and dynamic allocation; and
• enhanced performance potential, derived from parallel operation.

However, attaining these benefits in practice has remained elusive for a variety of reasons.

Environment and drivers

The Internet’s maturity and unprecedented popularity provide nearly ubiquitous computer connectivity in terms of both physical connections and platform-independent, interoperable communications protocols. The confluence of several technological developments and emerging usage modes provides motivation for evolving the Internet into a distributed computing platform. Streamlined e-business, with its increasing reliance on automation, depends on direct, automated machine-to-machine transactions. Moore’s law promises a continuing supply of compute capacity that will form the foundation for distributing intelligence throughout the Internet.

Other factors motivating Internet distributed computing (IDC) design include a desire to use computing resources efficiently and to hide the complexity inherent in managing heterogeneous distributed systems. These factors lead to a reduced total cost of ownership. The emergence of utility computing, autonomic computing,3 and massively parallel applications—such as peer-to-peer grid-like formations that aggregate resources from millions of personal computers—demonstrates the need for and feasibility of large-scale resource sharing.4

The evolution of IDC will likely include pervasive5,6 and proactive computing. The pervasive computing vision postulates an Internet an order of magnitude greater in scale, composed of such diverse entities as sensors, cars, and home appliances. Increased scale will require ad hoc opportunistic configurations, as well as expanded naming and addressing schemes, such as IPv6. It will also force application designs that treat intermittent node availability as a default operating assumption rather than a failure mode.

Requirements

Key requirements for distributed computing in the environment we describe include support for heterogeneity and the ability to scale from the relatively few devices of a proximity-area network up to a global scale. The explosive developments in wireless technology make support for mobility another important requirement. These developments will continue to produce a significant increase in the number of devices and form factors that need to connect to each other and to services and data.


PROPOSED APPROACH

Our approach attempts to identify key design principles and layers of abstraction that must be provided to give the Internet network-application-hosting capability. Following that, our implementation strategy builds on and reuses as much of the work in related standards and emerging technologies as possible.

Architecturally, the central idea is to virtualize resources that a node may want to share, such as data, computation cycles, network connections, or storage. The system adds available resources to the networked pool from which the resources needed to complete a given task—such as executing an application—can be aggregated dynamically. An aggregated collection of resources operates as an assembly only for the period required to complete the collective task, after which the resources return to the network pool. The network can be either public or private, as long as it uses Internet-standards-compliant protocols and mechanisms for internode communication and other services.

We assume that nodes willing to share a certain subset of their resources use mechanisms to announce their availability, possibly stating the terms of usage, to the rest of the distributed system or systems in which they are willing to collaborate. Individual nodes may be motivated to share resources for several reasons: to provide or gain access to unique data, to trade temporarily underutilized resources for profit, or to gain the ability to draw upon the collective resources to handle their own peak loads. In addition, a company or an organization can mandate a certain modality of resource sharing to improve its overall asset utilization and realize higher effective aggregate computational power.

Even though this nascent area has only a few real-life implementations, several benefits have already been observed in practice. For example, researchers performed the largest computation on record as of this writing—1.87 × 10²¹ floating-point operations—by aggregating the resources of millions of PCs whose owners donated their machines’ unused cycles to a scientific project.4
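To make the pooling idea concrete, the following minimal Java sketch shows a node announcing shareable resources, a task aggregating a subset of them, and the resources returning to the pool when the task completes. All type and method names are invented for illustration; no published IDC interface is implied.

    import java.util.*;

    // Illustrative sketch of the networked resource pool: nodes announce
    // shareable resources, a task aggregates what it needs, and the
    // resources return to the pool when the collective task completes.
    public class ResourcePool {
        public record Resource(String nodeId, String kind, int capacity) {}

        private final List<Resource> available = new ArrayList<>();

        // A node announces a subset of its resources, possibly with usage terms.
        public synchronized void announce(Resource r) { available.add(r); }

        // Dynamically aggregate matching resources for one task; they leave
        // the pool for the duration of the task.
        public synchronized List<Resource> aggregate(String kind, int count) {
            List<Resource> picked = new ArrayList<>();
            Iterator<Resource> it = available.iterator();
            while (it.hasNext() && picked.size() < count) {
                Resource r = it.next();
                if (r.kind().equals(kind)) { it.remove(); picked.add(r); }
            }
            return picked;
        }

        // After the task completes, the assembly dissolves.
        public synchronized void release(List<Resource> rs) { available.addAll(rs); }

        public static void main(String[] args) {
            ResourcePool pool = new ResourcePool();
            pool.announce(new Resource("nodeA", "cpu", 4));
            pool.announce(new Resource("nodeB", "cpu", 2));
            pool.announce(new Resource("nodeC", "storage", 100));
            List<Resource> task = pool.aggregate("cpu", 2);  // assemble for one task
            System.out.println("aggregated: " + task);
            pool.release(task);                              // return to the pool
        }
    }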


Figure 1. Architectural concept and component stack. Applications run on a network application-hosting environment that spans node operating systems (client and server) and their local resources. Its layers, from the bottom up: communication (connectivity, messaging); resource virtualization and management (abstraction, naming, sharing); discovery (publishing, description); dynamic configuration and binding (runtime); and aggregation/orchestration (policy, autonomy, systemic dependencies). Security; system management and policies; and reliability, availability, and scalability cut across all layers. Architecturally, this proposal virtualizes resources that a node may want to share. It then adds available resources to the networked pool, from which resources needed to complete a given task can be allocated and aggregated dynamically.

IDC design principles

We believe that two key design principles go far toward meeting the requirements of IDC: embedding intelligence in the network and creating self-configuring, self-organizing network structures. Other researchers’ experience and our own experiments indicate that distributing intelligence throughout the network tends to improve scalability. The self-organizing and self-configuring aspects contribute to both scalability and resiliency by creating ad hoc networks of currently available nodes and resources. Complex and rigid fixed-configuration networks are neither scalable nor manageable enough for use in linking wireless devices that move in and out of range and must power down intermittently to conserve energy.

Improved scalability and performance result largely from the preference for use of local resources. Using local resources tends to shorten communication distances between the data source, its point of processing, and optional presentation to the user. This, in turn, results in reduced latency and a bias toward consuming edge bandwidth rather than the backbone bandwidth needed to communicate with remote centralized servers—a behavior typical in client-server systems. Further, this trend is especially valuable in short-range wireless network configurations such as Bluetooth, 802.11x, and the emerging ultrawideband technologies. These configurations can benefit from high-bandwidth, low-latency communication within a cell or in close proximity to it. Moreover, infusing intelligence into the network also tends to distribute the load across many processing points, as opposed to creating congestion at a few heavily used hot spots.

Network services and abstractions

Figure 1 shows the key network services and abstraction layers necessary to provide an application-hosting environment on the Internet.

Resource virtualization. Although resource virtualization is a generally useful concept, we concentrate primarily on its use for resource sharing in distributed systems. In that context, resource virtualization can be thought of as the abstraction of some defined functionality and its public exposure as a service through an interface that applications and resource managers can invoke remotely. We consider a service to be a virtualized software functional component. Services can be advertised and discovered using directories and inspection. Once discovered, an invoking entity can bind to the selected service and start communicating with its externally visible functions, preferably via platform-independent protocols. Each such virtualized component can be abstracted, discovered, and bound to.

These concepts can be extended to virtualize hardware resources, such as compute cycles, storage, and devices. This deceptively simple extension transforms the software component model into a distributed component model whose immense power may not be immediately obvious. Carrying its application to a logical extreme, we can envision a planetary-sized pool of composable hardware and software resources, connected via the Internet, that is described and accessible using service abstractions.
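As a rough illustration of the service abstraction, the sketch below wraps one shareable hardware resource, compute cycles, behind an interface and pairs it with the descriptive metadata a node might advertise. The interface, record fields, and endpoint are hypothetical.

    import java.util.Map;

    // Sketch of resource virtualization: a shareable resource is exposed as a
    // service behind an interface, plus metadata that discovery can match on.
    public class VirtualizedResource {
        interface ComputeService {        // the virtualized functionality
            byte[] execute(byte[] task);  // remotely invocable entry point
        }

        // What the node would advertise; discovery matches on these fields.
        record ServiceDescription(String name, String endpoint,
                                  Map<String, String> attributes) {}

        public static void main(String[] args) {
            ComputeService svc = task -> task;  // trivial provider: echoes its input
            ServiceDescription ad = new ServiceDescription(
                    "compute.cycles", "http://nodeA:8080/compute",
                    Map.of("cost", "0", "location", "lab-3"));
            System.out.println("advertising " + ad.name() + " at " + ad.endpoint());
            System.out.println(new String(svc.execute("hello".getBytes())));
        }
    }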

Resource discovery. Discovery is a fundamental IDC component, simply because the system must find a service before it can use it. Traditional static systems often resolve discovery implicitly or fix it at configuration or compile time. To accommodate both mobile and fixed-location but dynamically aggregated environments, IDC must support a flexible service advertisement-and-discovery mechanism. Applications can discover services based on their functionality, characteristics, cost, or location. Dynamic discovery enables devices to adaptively cooperate and extend functionality so that the whole becomes, ideally, greater than the sum of its parts.

Dynamic configuration and runtime binding. Dynamic configuration depends upon the capability to bind components at runtime, as opposed to design or link time. Deferred or runtime binding can be implemented with the assistance of service discovery mechanisms. Its primary benefit is decoupling application design from detailed awareness of the underlying system configuration and physical connectivity. In effect, dynamic configuration facilitates application portability across a wide range of platforms and network configurations. It also decouples development of the service-user code from the service-provider code, unlike the tight, tandem development with distributed shared logic that is inherent in client-server applications.

Runtime binding also enables desirable system capabilities such as

• load balancing, by binding to a least-loaded service from a functionally equivalent group (see the sketch at the end of this section); and
• improved reliability, by binding to a service explicitly known to be available at invocation.

In ad hoc and self-configuring networks, runtime binding facilitates adaptive peer configurations in settings with high node-fluctuation rates, such as 802.11x hot spots or clusters of wireless sensors that power down intermittently to conserve energy.

Resource aggregation and orchestration. We define resource orchestration as the control and management of an aggregated set of resources for completing a task. The term also includes the communication and synchronization necessary for coordination and collation of partial results. Once a task completes, the system can release resources back to the pool for allocation to other uses. Operating systems commonly use this approach, which IDC extends to the network scale. Using runtime resource aggregation to meet application requirements implies that applications must be designed so that they can state their resource requirements explicitly—or at least provide hints to the execution system that let it generate a reasonably efficient estimate.
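The load-balancing capability noted in the list above reduces to a few lines: given a functionally equivalent group of providers discovered at runtime, the binder selects the least-loaded one at the moment of invocation. The Provider record and its load values are illustrative stand-ins for information a real system would obtain from discovery.

    import java.util.*;

    // Sketch of runtime binding: choose, at invocation time, the least-loaded
    // provider from a functionally equivalent group.
    public class RuntimeBinder {
        record Provider(String endpoint, double load) {}

        // Bind to the least-loaded provider currently known to be available.
        static Optional<Provider> bind(List<Provider> discovered) {
            return discovered.stream().min(Comparator.comparingDouble(Provider::load));
        }

        public static void main(String[] args) {
            List<Provider> candidates = List.of(
                    new Provider("http://nodeA/svc", 0.8),
                    new Provider("http://nodeB/svc", 0.2),  // least loaded, so chosen
                    new Provider("http://nodeC/svc", 0.5));
            bind(candidates).ifPresent(p -> System.out.println("bound to " + p.endpoint()));
        }
    }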

IDC building blocks

Mindful of the successful Internet model that builds on existing standards and available technology, we have sought areas that could provide ideas, technology, or standards to reuse or adapt for IDC implementation. A combination of Web services, peer-to-peer systems, and grid computing provides a useful and powerful collection of building blocks.

Web services. Web services7 are self-contained, loosely coupled software components that define and use Internet protocols to describe, publish, discover, and invoke each other. They can dynamically locate and interact with other Web services on the Internet to build complex machine-to-machine programmatic services. These services can be advertised and discovered using directories and registries such as the Universal Description, Discovery, and Integration (UDDI) specification8 or inspected using the Web Services Inspection Language.

To be useful, each discovered component must be described in a well-structured manner. The Web Services Description Language (WSDL) provides this capability, although deriving semantic meaning from a service presents some practical difficulties. The semantic Web initiative9 addresses this problem. Using WSDL, an invoking entity can bind to the selected service and start communicating with its externally visible functions via advertised protocols such as SOAP. XML, which Web services use extensively, supplies a standard type system and wire format for communication, an essential requirement for platform-independent data representation and interoperability.

Web services can provide some key IDC ingredients such as service description, discovery, and platform-independent invocation using either remote procedure calls or message-based mechanisms. They also provide support for runtime binding.
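In outline, a client’s discover-then-bind cycle looks like the sketch below. The Registry interface is a stand-in for a UDDI registry client; a real client would query an actual UDDI API and then issue the call over the protocol the service’s WSDL advertises, such as SOAP.

    import java.util.List;

    // Sketch of discovery-based invocation: query a registry for matching
    // services, then bind to one of the returned endpoints.
    public class DiscoverAndInvoke {
        interface Registry {                        // stand-in for a UDDI registry
            List<String> find(String serviceName);  // returns matching endpoints
        }

        public static void main(String[] args) {
            Registry registry = name -> List.of("http://example.org/quote");
            List<String> endpoints = registry.find("StockQuoteService");
            if (!endpoints.isEmpty()) {
                // A real client would now fetch the service's WSDL and issue a
                // SOAP request; only the bind step is shown here.
                System.out.println("binding to " + endpoints.get(0));
            }
        }
    }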

Peer-to-peer computing. Providing both a design approach and a set of requirements, P2P computing exploits the aggregate compute power that millions of networked personal computers and high-end personal digital assistants provide. P2P has been used for applications such as file backup, content distribution, and collaboration—all without requiring centralized control and management. Decentralization and access to the vast resources distributed at the edge of the Internet usually motivate P2P use. In today’s P2P wide-area network, data and services are replicated widely for availability, durability, and locality, and thus provide suitable building blocks for IDC.

P2P solutions must deal with problems inherent in vastly heterogeneous collections of personal machines. These include intermittent node availability, frequent lack of Domain Name System IP addressability stemming from dynamically assigned and translated IP addresses, and difficulties with bidirectional communication through network address translation (NAT) units and firewalls.


Earlier and Related Work in Distributed Computing

Web services,1 peer-to-peer systems, and grid computing2 represent recent work on elements of distributed computing, the origins of which stretch back to the days of simple message passing, when communication primitives among components simply moved bits from one machine to another with some form of process synchronization.3 Later abstractions integrated the communication into the application with relatively transparent primitives for remote procedure calls,4 object method invocations,5 and bulletin boards.6 Distributed computing became the de facto standard for building three-tier business systems using atomic transactions to tie back-end databases to the business logic and interface.7

Proponents frequently describe potential large-scale, widely distributed, multiorganization uses for distributed computing. P2P-based environments such as Past8 and JXTA9 are examples of such systems. Historically, however, distributed computing has been most successful when used in local networks within a single organization. Implementation inconsistencies and tightly controlled or proprietary standards drove application development toward single-vendor systems. Complexities in security and discovery infrastructure, while preserving local autonomy, proved a significant barrier to building distributed applications that spanned organizational boundaries. Lack of adequate management tools also limited the scale and adaptability of distributed systems.

References

1. W3C Architecture Domain, Web Services Activity; http://www.w3.org/2002/ws/.
2. Global Grid Forum; http://www.ggf.org/.
3. D. Cheriton, “VMTP: A Transport Protocol for the Next Generation of Communications Systems,” Proc. SIGCOMM 86, ACM Press, 1986, pp. 406-415.
4. A. Birrell and B.J. Nelson, “Implementing Remote Procedure Calls,” ACM Trans. Computer Systems, Feb. 1984, pp. 39-59.
5. Object Management Group, CORBA 2.3.2, The Common Object Request Broker: Architecture and Specification, Oct. 1999; http://www.omg.org/docs/formal/99-10-07.pdf.
6. D. Gelernter, “Generative Communication in Linda,” ACM Trans. Programming Languages and Systems, Jan. 1985, pp. 80-112.
7. A. Spector et al., “Camelot: A Flexible, Distributed Transaction Processing System,” Proc. 33rd IEEE Comp. Soc. Int’l Conf., IEEE CS Press, 1988, pp. 432-437.
8. P. Druschel and A. Rowstron, “Past: A Large-Scale, Persistent Peer-to-Peer Storage Utility,” Proc. 8th IEEE Workshop on Hot Topics in Operating Systems, IEEE CS Press, 2001, pp. 75-80.
9. L. Gong, “JXTA: A Network Programming Environment,” IEEE Internet Computing, May/June 2001, pp. 88-95.



P2P provides a distributed computing paradigm in which participating machines usually act as equals that both contribute to and draw from the resources shared among a group of collaborating nodes. Often, these peers participate from the network’s edge instead of its center, where specialized or dedicated compute servers reside. These fringe end-user machines can dynamically discover each other and form an ad hoc collaborative environment. Thus, many existing P2P systems address and solve naming, discovery, intermittent connectivity, and NAT and firewall traversal issues, but do so in proprietary ways that preclude their direct incorporation into IDC.

An IDC system can mimic P2P techniques to form overlay networks that provide location-independent routing of messages directly to the qualifying object or service, bypassing centralized resources and using only point-to-point links (a minimal routing sketch appears at the end of this section). These overlay networks can distribute digital content to a large Internet user population more cost-effectively than can simple client-server techniques alone.

Grid computing. The term grid comes from the notion of computing as a utility, and it derives from the analogy to a power grid: a pool of resources aggregated to meet variations in load demand without users’ awareness of or interest in the details of the grid’s operation. Grid computing extends conventional distributed computing by facilitating large-scale sharing of computational and storage resources among a dynamic collection of individuals and institutions. Such settings have unique scale, security, authentication, and resource access and discovery requirements.

Grid computing’s origins and most of its current applications lie in high-performance computing. Initially, grid technologies’ primary beneficiaries were scientists who wanted access to large data sets or unique data sources, or who wanted to aggregate mass-produced compute power to form comparatively inexpensive virtual supercomputers. As grid technologies matured, it became evident that the problems being addressed and the techniques being developed could apply to a broader range of computing problems. The subsequent inclusion of Web services, exemplified by the Open Grid Services Architecture (OGSA),2 provides a sound basis for cross-organizational and heterogeneous computing.

Grid computing provides both architectural solutions and middleware for resource virtualization, aggregation, and related abstractions. Higher layers of grid computing, not described here, provide mechanisms and tools for application distribution and execution across a collection of machines.
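Returning to the overlay networks described above: their location-independent routing can be approximated with a single identifier space shared by nodes and keys, where each key is owned by the live node whose ID most closely follows it, in the style of Chord or Pastry. This simplified sketch resolves a lookup in one hop and omits the multihop routing tables real overlays use.

    import java.util.TreeMap;

    // Sketch of location-independent routing in a P2P overlay: a lookup is
    // routed by key alone, so requests reach whichever live node currently
    // owns the key, with no centralized directory involved.
    public class Overlay {
        private final TreeMap<Integer, String> ring = new TreeMap<>();

        void join(String node)  { ring.put(id(node), node); }
        void leave(String node) { ring.remove(id(node)); }

        // Route a key to its responsible node, independent of where it lives.
        String lookup(String key) {
            var owner = ring.ceilingEntry(id(key));
            return (owner != null ? owner : ring.firstEntry()).getValue();
        }

        private static int id(String s) { return s.hashCode() & 0x7fffffff; }

        public static void main(String[] args) {
            Overlay o = new Overlay();
            o.join("nodeA"); o.join("nodeB"); o.join("nodeC");
            System.out.println("'song.mp3' owned by " + o.lookup("song.mp3"));
            o.leave("nodeB");  // membership changes; ownership remaps automatically
            System.out.println("'song.mp3' owned by " + o.lookup("song.mp3"));
        }
    }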

PROTOTYPE AND PROOF OF CONCEPT

As the “Earlier and Related Work in Distributed Computing” sidebar describes, grid and Web services concentrate on large-scale systems with fixed or slowly changing node populations, while P2P systems deal with somewhat more intermittent membership. Both areas provide several prototypes and proofs of concept. We designed and implemented a prototype system to test the applicability and downward scalability of these design principles to smaller, proximity-area wireless networks with mobile clients.

Mobile challenges

Web services, grid computing, and P2P computing share certain assumptions about the compute and communications environment. Each assumes “strong” Internet access: fast, low-latency, reliable, and durable connections to the network infrastructure. They also rely on predominantly static configurations—for example, preconfigured grid node candidates or well-known UDDI servers and highly available Web servers. Currently, mobile devices can participate in these constituent communities only by mimicking a traditional network entity for the duration of a communication session. Mobile devices participate primarily by establishing, breaking, and reestablishing these well-behaved sessions, which relegates them to second-class status.

Dynamic discovery requirements

Mobile computing’s inherently nomadic nature—which includes moving within and between environments—poses some unique requirements, foremost among them the ability to locate available services and resources in or near a new location. Additionally, devices must be able to efficiently register their willingness to offer services to the local region, include their hardware constraints, and specify how long they will be available in that region.

Two simple but greatly differing usage scenarios involving users with wireless mobile devices highlight the challenges that mobile computing poses for a dynamic discovery mechanism:

• Sue meets several other numismatists at a flea market to exchange coin inventories stored on their respective devices and look for possible trading opportunities and other interactions.
• Sue and other researchers meet at a working session in a conference room to review a reference design. They share a single display projector and laptop documents and launch last-minute performance simulations into the surrounding desktop PC grid.


The first scenario assumes no surrounding infrastructure. The ad hoc networking takes place among devices through direct discovery and inspection, without the benefit of a structured network, routers, servers, or even power outlets.10 The second scenario combines a wireless networking infrastructure and a fixed enterprise infrastructure, including a desktop PC grid. The group works behind a firewall within a more secure and trusted environment.

From these scenarios we conclude that Sue’s mobile devices need telescoping security—she may not want to share documents from work with random people at the flea market, and she may want to reveal only certain parts of her personal data or coin collection database to trusted individuals she meets there. In contrast, in the office environment, the discovery mechanism must let workers spontaneously collaborate while finding and using local, fixed resources such as disarticulated I/O and remote compute resources. In both cases, discovery must be resource-friendly, constructed to minimize power drains and network traffic for participating devices. Other obvious issues include service advertisement freshness, replication and federation of discovery directories, and service description and invocation semantics. A variety of methods—such as SLP, Salutation, UPnP, Jini, Bluetooth SDP, and UDDI—attest to the varied issues and needs that discovery addresses.11

To avoid repeating earlier work, we investigated the applicability of several existing service discovery protocols to IDC. Our evaluation included creating a requirements set from the IDC design goals. We then compared each service discovery protocol against those requirements.

We first required that the discovery mechanism support both directory-oriented and directory-free operation modes. Directory-free methods tend to be more efficient in ad hoc or mobile environments, while directory-oriented solutions tend to be more efficient in a static environment. Second, we required that the mechanism be scalable in the extreme, from small ad hoc networks to the complete enterprise and beyond. This, combined with the need for directory-oriented operation, implies the need to arrange the available directory servers hierarchically.



Figure 2. Ad hoc networking implementation. A PDA and a notebook discover and use each other’s services directly. For casual or impromptu meetings, a combination of mobile service discovery and direct peer interaction works well without any other supporting infrastructure.

Third, we required that discovery be platform-independent: It must be usable from any language and OS available in the IDC environment. Fourth, we required support for unreliable networks and transport protocols, while allowing use of a reliable transport when available and appropriate. We also required that service registration with directory machines use soft-state principles to prevent those directories from filling up with outdated information. Fifth and finally, we required that the system permit discovery of both a service’s attributes and its invocation interface.
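The soft-state requirement is straightforward to illustrate: each registration carries a lease, lookups purge entries whose lease has lapsed, and a provider that stops refreshing simply disappears from the directory. The sketch below is illustrative and is not our prototype’s actual code.

    import java.util.*;

    // Sketch of soft-state registration: every entry carries a lease, and
    // entries that are not refreshed before the lease expires vanish, so
    // directories cannot fill with stale advertisements.
    public class SoftStateDirectory {
        record Entry(String endpoint, long expiresAtMillis) {}
        private final Map<String, Entry> entries = new HashMap<>();

        // Register or refresh: the provider must re-announce within the TTL.
        void register(String name, String endpoint, long ttlMillis) {
            entries.put(name, new Entry(endpoint, System.currentTimeMillis() + ttlMillis));
        }

        // Lookups ignore (and purge) entries whose lease has lapsed.
        Optional<String> lookup(String name) {
            Entry e = entries.get(name);
            if (e == null || e.expiresAtMillis() < System.currentTimeMillis()) {
                entries.remove(name);
                return Optional.empty();
            }
            return Optional.of(e.endpoint());
        }

        public static void main(String[] args) throws InterruptedException {
            SoftStateDirectory dir = new SoftStateDirectory();
            dir.register("printer", "http://10.0.0.5/print", 100);
            System.out.println(dir.lookup("printer"));  // present
            Thread.sleep(150);                          // lease lapses, no refresh
            System.out.println(dir.lookup("printer"));  // gone: Optional.empty
        }
    }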

IDC dynamic discovery prototype We prototyped a solution using Web service standards, expanding them where needed to meet our requirements for mobility support. Each device includes a description of the services it will share, in a format suitable for posting in the UDDI registry. To meet the needs of very small ad hoc networks, such as Sue’s walk-by scenario, devices can inspect each other’s service offerings after establishing network-level discovery and connectivity. We use WSDL and UDDI formats for convenience because most Web service clients can interpret them. For larger ad hoc and hybrid fixed or wireless networks, our prototype dynamically creates a UDDI-like local directory with proxy entries for all local, currently available services, which it stores at a local node. Each device has a local UDDI registry and server that contains the services it offers. The system filters service advertisements based on environmental parameters such as network media, location, and security. Although bootstrap discovery of the UDDI service on a given device can be accomplished in several ways, our prototype broadcasts queries over a well-known transmission control protocol port. For smaller devices that offer fewer services, custom-streamlined versions of the UDDI can maintain a list instead of a full database. Devices then query or inspect each other’s UDDI service direc44


As Figure 2 shows, this mode of operation is suitable for ad hoc networking scenarios, such as Sue’s flea market meeting. Simple inspection does not scale well to larger concentrations of portable devices, since each arrival of a new device requires all others to consume the power and bandwidth that the newcomer needs to inspect their service descriptions. To accommodate power-constrained devices, whose battery-life-depletion rate depends on network traffic intensity, our architecture permits the election or predesignation of a local master directory (LMD), usually hosted by the node that has the best combination of durability, power, and connectivity. The LMD node aggregates service advertisements and can perform query resolution and service matching.

We found the LMD concept critical for achieving scalability as well. An LMD might, for example, serve as a service lookup aggregator or proxy for fixed devices and infrastructure in the vicinity, such as the area around one or more Wi-Fi access points. To ensure scalability, the architecture comprehends a hierarchical LMD federation, an organization that might, for example, be used within an enterprise or campus. To accommodate localized scaling and long-distance discovery, we incorporated support for hierarchical searches, resolving queries at the point most local to the requestor. Figure 3 shows a sample hierarchical configuration that also includes conventional UDDI-like use on fixed servers.
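LMD election can be sketched simply: peers score each candidate on the qualities named above (durability, power, and connectivity), and the highest-scoring node hosts the directory. The scoring weights below are invented for illustration.

    import java.util.*;

    // Sketch of LMD election: score candidates on durability, power, and
    // connectivity; the best-placed node hosts the local master directory.
    public class LmdElection {
        record Candidate(String id, double durability, double power, double connectivity) {
            double score() { return 0.4 * durability + 0.3 * power + 0.3 * connectivity; }
        }

        static Candidate elect(List<Candidate> peers) {
            return peers.stream()
                        .max(Comparator.comparingDouble(Candidate::score))
                        .orElseThrow();
        }

        public static void main(String[] args) {
            List<Candidate> peers = List.of(
                    new Candidate("pda",      0.3, 0.2, 0.5),   // battery-powered, mobile
                    new Candidate("notebook", 0.6, 0.7, 0.8),
                    new Candidate("desktop",  0.9, 1.0, 0.9));  // wall power, wired: wins
            System.out.println("LMD host: " + elect(peers).id());
        }
    }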

We believe that IDC, or a similar variant, is poised to accelerate distributed computing on the Internet by providing an environment to aggregate, discover, and dynamically assemble computing structures on demand to perform a given task. Its underlying principles can be reincarnated at different scales in sensor, home-area, enterprise, and wide area networks. Indeed, IDC provides a foundation for pervasive computing, from small-scale personal area networks to virtual, planetary-scale systems.12 Pervasive computing is often associated with the creation of smart environments that contain embedded computing resources. In such environments, mobile users will be able to carry a subset of physical computer resources, augmenting their computational, storage, and UI capabilities as required by dynamically aggregating resources found in the environment.

Security and authentication, while beyond this article’s scope, are necessary for most practical applications.

Figure 3. Local master directory implementation in a dynamic hybrid environment. Notebooks and PDAs come and go around an 802.11b hot spot, powering on and off to save battery life; a dynamically chosen LMD links them to fixed services and resources in the vicinity and to other networks and the Internet. The LMD, a key component for scalability, aggregates local service advertisements and performs service matching and query resolution based on the number, type, and condition of machines in the locality.

Resource sharing and aggregation across potentially distinct security domains and levels of trust necessitate protecting both the host and the guest application. Moreover, application execution over a collection of components requires a single systemwide sign-on, as opposed to unwieldy authentication at each individual node.

Although the IDC design is neither final nor complete, the work of others and our early prototypes indicate that it provides a promising foundation upon which to build. Completing the task will take a collective effort and the Internet community’s combined wisdom. ■

Acknowledgments We thank Paul R. Pierce for numerous discussions of network service layering and a minimal set of useful abstractions. We also thank Lenitra M. Clay for struggling to make our prose more readable.

References

1. A. Oram, ed., Peer-to-Peer: Harnessing the Power of Disruptive Technologies, O’Reilly & Associates, 2001.
2. I. Foster et al., “Grid Services for Distributed Systems Integration,” Computer, June 2002, pp. 37-46.
3. P. Horn, “Autonomic Computing: IBM’s Perspective on the State of Information Technology,” IBM Research white paper, Oct. 2001; http://www.research.ibm.com/autonomic/manifesto/.
4. D.P. Anderson et al., “SETI@home: An Experiment in Public-Resource Computing,” Comm. ACM, Nov. 2002, pp. 56-61.
5. T. Kindberg and A. Fox, “System Software for Ubiquitous Computing,” IEEE Pervasive Computing, Jan.-Mar. 2002, pp. 70-81.
6. M. Satyanarayanan, “Pervasive Computing: Vision and Challenges,” IEEE Personal Communications, Aug. 2001, pp. 10-17.
7. S. Graham et al., Building Web Services with Java: Making Sense of XML, SOAP, WSDL and UDDI, Sams Technical Publishing, 2001.
8. Universal Description, Discovery and Integration (UDDI); http://www.uddi.org/.
9. T. Berners-Lee et al., “The Semantic Web,” Scientific American, May 2001, pp. 34-43.
10. L. Feeney et al., “Spontaneous Networking: An Application-Oriented Approach to Ad Hoc Networking,” IEEE Communications, June 2001, pp. 176-181.
11. G.G. Richard III, “Service Advertisement and Discovery: Enabling Universal Device Cooperation,” IEEE Internet Computing, Sept./Oct. 2000, pp. 18-26.
12. L. Peterson et al., “A Blueprint for Introducing Disruptive Technology into the Internet”; www.planetlab.org/pdn/pdn02-001.pdf.

Milan Milenkovic is a principal engineer and manager of virtual platform technologies at Intel Labs. His research interests include distributed computing, resource virtualization, and operating systems. Milenkovic received a PhD in electrical and computer engineering from the University of Massachusetts, Amherst. He is a senior member of the IEEE and the IEEE Computer Society. Contact him at [email protected].

Scott H. Robinson is a senior researcher at Intel Labs. His research interests include resource virtualization, distributed computing, mobile computing, and pervasive computing. Robinson received a PhD in electrical and computer engineering from Carnegie Mellon University. He is a member of the IEEE and the IEEE Computer Society. Contact him at [email protected].

Rob C. Knauerhase is a staff systems architect at Intel Labs. His research interests include mobile computing and communications, internetworking, system software, and information privacy in the digital world. Knauerhase received an MS in computer science from the University of Illinois at Urbana-Champaign. He is a senior member of the IEEE and the IEEE Computer Society. Contact him at [email protected].

David Barkai is a high-performance computing architect at Intel. His research interests include numerical techniques, high-performance computing systems, distributed and P2P systems, and parallel scientific applications. Barkai received a PhD in theoretical physics from the Imperial College of Science and Technology, University of London. Contact him at [email protected].

Sharad Garg is a modular-server architect at Intel. His research interests include modular storage servers, distributed computing, and distributed file systems. Garg received a PhD in computer science from the University of Connecticut. Contact him at [email protected].

Vijay Tewari is a software architect at Intel Labs. His research interests include distributed computing, managed runtime environments, resource virtualization, and systems management. Tewari received an MS in computer science from the University of Minnesota. Contact him at vijay.tewari@intel.com.

Todd A. Anderson is a senior software engineer at Intel. His research interests include distributed file systems, protocol-based deconstruction of network devices, and object-oriented and modular programming constructs. Anderson received a PhD in computer science from the University of Kentucky. Contact him at [email protected].

Mic Bowman is the principal investigator for the Planetary Services strategic research project at Intel. His research interests include distributed computing, large-scale system management, and distributed query processing. Bowman received a PhD in computer science from the University of Arizona. Contact him at [email protected].