GUEST EDITORIAL
CLOUD NETWORKING AND COMMUNICATIONS
Amitabh Mishra
Join the online discussion group for this Feature Topic here: http://community.comsoc.org/forums/commag-features-and-series
IEEE Communications Magazine published a feature topic on cloud computing in September 2012. This Special Issue on the same topic recognizes the proliferation of cloud networking and its importance in our lives today. Cloud architectures are continuously evolving, driven by challenges in multiple areas, for example:
• Scalability of computing, storage, and bandwidth resources
• Capacities of switches, routers, and gateways
• Efficient utilization of resources
• Dynamic service creation
• Competitive cost structures for users and service providers
• Reliability of applications and hardware
• Privacy of users and data, among other areas

These challenges arise because a typical cloud consists of tens of thousands of servers, exabytes of storage, and terabits per second of bandwidth, and serves tens of thousands of users at a time. The creation of such a massive infrastructure would not have been possible without advances in the virtualization of computing, storage, and networking resources. Virtualization becomes a necessity when hundreds or thousands of user requests arrive at a cloud dynamically, each with different storage, compute, and bandwidth demands. In such a dynamic environment, manual provisioning of physical resources and management of client requests are impractical, if not altogether impossible.

Software defined networking (SDN) is a new networking architecture paradigm designed around standardized application programming interfaces (APIs) that allow network programmers to define and reconfigure how resources and data are managed within a network. This ability to reprogram switches on the fly was not possible before, because the control logic of a network was embedded in, and distributed across, its individual switches and routers. Generally speaking, the transmission of data was handled by dedicated switches or routers that forwarded packets between servers and
other connected devices. The control plane of a switch, which builds the routing tables that determine how packets are forwarded toward their destinations and handles management functions such as connection, performance, and fault management, was bundled into the same device as the data plane. SDN decouples the control and data planes: the control logic is moved into a logically centralized controller that programs the network through APIs, pulling state from and reconfiguring the resources of any connected device on the network. In essence, SDN is a three-tiered stack: applications and high-level instructions sit at the top layer; a controller sits in the middle, governing the data traffic; and the bottom tier contains the switches and other networking infrastructure. The controller talks to the applications through APIs known as "northbound APIs" and to the networking platforms through another set of APIs collectively known as "southbound APIs." SDN has become inseparable from virtualization, and therefore from cloud computing. The first and fourth articles of this issue provide an excellent introduction to SDN.

While data centers becoming energy hogs may not be classified as a communication or networking challenge for clouds, energy has become one of the critical issues in containing the operational costs of data centers. An interesting fact about energy consumption in data centers is that energy is needed to keep the servers running, but far more energy is needed to keep them cool enough to preserve the reliability of computations and contain hardware failure rates. By recent estimates, data centers consume more than 1 percent of the world's electricity and emit as much carbon dioxide as all of Argentina. Keeping down the energy cost of cooling data centers has emerged as a major challenge facing clouds.
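Returning to the SDN stack described above, its three tiers can be sketched in a few lines of code. The sketch below is purely illustrative: the class and method names are hypothetical, and real controllers (for example, those speaking OpenFlow on the southbound side) expose far richer interfaces.

```python
# Toy illustration of the three-tier SDN stack: an application expresses
# intent through a "northbound" call, and the controller translates it
# into flow-table entries pushed "southbound" to the switches.

class Switch:
    """Bottom tier: a forwarding element holding a flow table."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # match -> action

    def install_rule(self, match, action):   # southbound call
        self.flow_table[match] = action

class Controller:
    """Middle tier: translates application intent into flow rules."""
    def __init__(self, switches):
        self.switches = switches

    def request_path(self, src, dst, out_port):   # northbound call
        # A real controller would compute a path over its topology
        # view; here every switch simply gets the same rule.
        for sw in self.switches:
            sw.install_rule((src, dst), f"forward:{out_port}")

# Top tier: an "application" expressing intent via the northbound API.
net = [Switch("s1"), Switch("s2")]
ctrl = Controller(net)
ctrl.request_path("10.0.0.1", "10.0.0.2", out_port=3)
print(net[0].flow_table)   # {('10.0.0.1', '10.0.0.2'): 'forward:3'}
```

The key point the sketch captures is the separation of concerns: the application never touches a switch directly, and the switches carry no routing logic of their own.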
The first article in this feature topic, "Network Virtualization and Software Defined Networking," provides a historical perspective on virtualization in computing, right
from the definition of virtual memory, through storage and server virtualization, virtual local area networks (VLANs), and virtual private networks (VPNs), and draws our attention to the renewed interest in network virtualization, which is now considered a prerequisite to realizing the full potential of cloud computing. As we know, a small computer network consists of an Ethernet switch to which the network interface cards (NICs) of all hosts belonging to that network are connected. This constitutes a layer 2 (L2) network, which grows into a much larger network when multiple L2 segments are connected to each other via a bridge, forming a subnet. Multiple L2 subnets connected to a router form a layer 3 (L3) network, which we call an IP network. A collection of several IP networks constitutes the Internet as we know it. A cloud may thus involve several L2 segments and multiple routers when data centers located in different geographical areas are part of the same cloud. In a typical scenario, where a client application is represented by one or more virtual machines that may be running on different servers belonging to different subnets, virtualization of NICs, L2 networks, L3 networks, L3 routers, the data centers, and even the Internet becomes necessary. This article provides excellent coverage of network virtualization issues.

The Open Application Delivery Network (OpenADN) is a new session-layer mechanism that the authors of the first article have developed by extending SDN features to support the requirements of application traffic, such as application delivery policies, performance, context, and security. OpenADN proposes to use SDN to coordinate and control packet forwarding policies in the routers of application service providers (ASPs) and Internet service providers (ISPs) so that they are consistent with the needs of the applications.
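The idea of steering traffic according to application delivery policies can be sketched as follows. This is not the actual OpenADN interface; the policy fields, replica names, and functions are hypothetical, chosen only to show how a controller might turn an ASP's delivery policy into a per-application forwarding rule.

```python
# Toy sketch of application-aware forwarding in the spirit of OpenADN:
# an ASP publishes a delivery policy (here, an ordered set of candidate
# service replicas), and a controller turns it into a forwarding rule.

POLICIES = {
    # app_id -> candidate service replicas permitted by the ASP's policy
    "video":  ["edge-eu", "edge-us"],
    "search": ["dc-us"],
}

REPLICA_LOAD = {"edge-eu": 0.9, "edge-us": 0.2, "dc-us": 0.5}

def choose_replica(app_id):
    """Pick the least-loaded replica allowed by the app's policy."""
    candidates = POLICIES[app_id]
    return min(candidates, key=lambda r: REPLICA_LOAD[r])

def forwarding_rule(app_id, client_ip):
    """Emit a (match, action) pair a controller could push to a router."""
    replica = choose_replica(app_id)
    return ((client_ip, app_id), f"steer-to:{replica}")

print(forwarding_rule("video", "198.51.100.7"))
# -> (('198.51.100.7', 'video'), 'steer-to:edge-us')
```

The point of the sketch is the division of labor: the ASP owns the policy, while the SDN control plane owns the mechanism that enforces it in ISP and ASP routers.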
OpenADN can be very effective in handling applications that are popular across the globe and are rendered via the servers of ASPs and ISPs located in different parts of the world.

The second article of this Special Issue, "Utilization of Data Center Networks," discusses the switching topologies that are popular in today's data center networks, and the role routing algorithms can play in achieving maximum throughput on these topologies. The article examines well-known routing protocols, such as Shortest Path Routing (SPR), SPR using node degree (SPRm), Equal Cost Multi-Path (ECMP), Valiant Balanced Routing (VBR), and optimized load balanced routing, on fat-tree, flattened butterfly, and dragonfly topologies. The optimized load balanced routing algorithms are defined in terms of balancing either the flows or the number of packets over a route. Flow-based routing has been shown to deliver packets more unevenly across the multiple links of the network than packet-based routing. The analysis results included in the article suggest that load balanced equal cost routing achieves the highest throughput, in packets per second, in both lossy and loss-free networks compared to the other algorithms.
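The flow-based splitting that ECMP performs can be sketched in a few lines. The sketch below is illustrative only: real routers compute the hash over the same five-tuple fields in hardware, and the function name here is invented for the example.

```python
# Minimal sketch of ECMP next-hop selection: hash a flow's 5-tuple so
# that every packet of a flow takes the same path (avoiding reordering)
# while different flows spread across the equal-cost links.

import hashlib

def ecmp_next_hop(five_tuple, equal_cost_paths):
    """Deterministically map a flow to one of the equal-cost paths."""
    key = "|".join(map(str, five_tuple)).encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return equal_cost_paths[digest % len(equal_cost_paths)]

paths = ["port1", "port2", "port3", "port4"]
flow_a = ("10.0.0.1", "10.0.1.9", 6, 49152, 80)  # src, dst, proto, sport, dport
flow_b = ("10.0.0.1", "10.0.1.9", 6, 49153, 80)  # same hosts, new source port

# The same flow always hashes to the same port ...
assert ecmp_next_hop(flow_a, paths) == ecmp_next_hop(flow_a, paths)
# ... while flows differing only in source port may take different links.
print(ecmp_next_hop(flow_a, paths), ecmp_next_hop(flow_b, paths))
```

This per-flow hashing is also the source of the imbalance the article measures: when flows differ greatly in size, hashing whole flows onto links loads them unevenly, whereas packet-based balancing spreads traffic more smoothly at the cost of possible reordering.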
IEEE Communications Magazine • November 2013
The third article of this issue, "Toward Efficient Data Access Privacy in the Cloud," focuses on the data privacy challenges that arise when sensitive data is stored in public clouds, because standard techniques such as encryption are not sufficient on their own. It has been shown that considerable information about encrypted content can be inferred from the frequency and the patterns with which the data is accessed; the access patterns to the data must therefore also be protected. Two mechanisms, oblivious RAM (ORAM) and private information retrieval (PIR), are commonly used to hide user access patterns from a server. Because PIR places a substantial computational burden on the server to preserve privacy, as opposed to the conventional practice of simply comparing key-value attributes, ORAM is often favored: it hides the target of each individual query by making data accesses cryptographically indistinguishable from each other. The article presents the challenges associated with ORAM and describes the algorithm the author has developed to overcome them.

The fourth article of this Special Issue, "Resource Allocation in a Network-Based Cloud Computing Environment: Design Challenges," discusses the challenges that arise in allocating computing and networking resources to a client request under performance, bandwidth, and energy expenditure constraints, from both the client and the service provider perspective. The article surveys recent resource allocation models for data centers that have appeared in the literature.

We would like to thank all the authors who submitted manuscripts to this Feature Topic, and the reviewers for their wisdom in helping select the four articles included here. This issue would not have been possible without the support of former and current Editors-in-Chief, Drs.
Steve Gorshe and Sean Moore, and assistance from publication staff, particularly Joseph Milizzo and Jennifer Porcello.
BIOGRAPHY

AMITABH MISHRA [SM] ([email protected]) is a faculty member in the Information Security Institute of Johns Hopkins University in Baltimore, Maryland. His current research is in the areas of cloud computing, data analytics, dynamic spectrum management, and data network security. In the past he has worked on the cross-layer design optimization of sensor networking protocols, media access control algorithms for cellular ad hoc interworking, systems for critical infrastructure protection, and intrusion detection in mobile ad hoc networks. His research has been sponsored by NSA, DARPA, NSF, NASA, Raytheon, BAE, APL, and the U.S. Army. Previously, he was an associate professor of computer engineering at Virginia Tech and a member of technical staff with Bell Laboratories, working on the architecture and performance of communication applications running on 5ESS switches. He received his B.Eng. and M.Tech. degrees in electrical engineering from the Government Engineering College, Jabalpur, and the Indian Institute of Technology, Kharagpur. He also obtained M.Eng. and Ph.D. degrees in electrical engineering from McGill University, and an M.S. in computer science from the University of Illinois at Urbana-Champaign. He is a member of ACM and SIAM. He has written 80 papers that have appeared in various journals and conference proceedings, and holds five patents. He is the author of a book, Security and Quality of Service in Wireless Ad Hoc Networks (Cambridge University Press, 2007), and is a Technical Editor of IEEE Communications Magazine.