THE INSTITUTE OF ELECTRONICS, INFORMATION AND COMMUNICATION ENGINEERS
TECHNICAL REPORT OF IEICE
A Proposal for Improving Virtual Network Resource Allocation by Observing All System Objects
Pedro MARTINEZ-JULIA†, Ved P. KAFLE†, and Hiroaki HARAI†
† National Institute of Information and Communications Technology (NICT), 4-2-1 Nukui-Kitamachi, Koganei, Tokyo 184-8795, Japan
E-mail: †{pedro,kafle,harai}@nict.go.jp
Abstract  In this paper we discuss an architecture that observes and exploits every element related to a virtual network environment in order to adjust resource allocation elastically in response to sporadic incidents and dynamic changes in system requirements. In contrast, current management and control approaches only consider observations from the resources directly affected by such events, which makes their resource allocation strategies insufficient. Our architecture follows an autonomic control approach and formulates both observations and control objectives semantically in order to infer the corrective and perfective actions needed to improve the controlled system.
Keywords  Network, Virtualisation, Resource, Control, Autonomic, Adaptation, Sensors, IoT
Exploiting Observations From All System Objects to Improve Virtual Network Resource Allocation
Pedro MARTINEZ-JULIA†, Ved P. KAFLE†, and Hiroaki HARAI†
† National Institute of Information and Communications Technology (NICT), 4-2-1 Nukui-Kitamachi, Koganei, Tokyo 184-8795, Japan
E-mail: †{pedro,kafle,harai}@nict.go.jp
Abstract  In this paper we discuss an architecture that exploits observations produced by all elements related to a virtual network environment in order to perform the elastic adjustment of resource allocation in response to sporadic incidents and dynamic changes in system requirements. On the contrary, current management and control approaches only consider observations from the resources that are affected by such events, leading to deficient allocation strategies. Our architecture follows an autonomic control approach and takes advantage of the semantic formulation of both observations and control objectives to infer the corrective and perfective actions needed to improve the controlled system.
Key words  Network, Virtualisation, Resource, Control, Autonomic, Adaptation, Sensors, Things
1. Introduction
The emergence of the digital world has pushed networks to evolve from being just an accessory to an indispensable part of everyday lives [2]. As more people, devices, and services are attached to different kinds of networks, more pressure is applied to their correct operation. The industry has responded by promoting the well-known Cloud infrastructures and platforms [1]. They offer allocation elasticity for network and computing resources, whose qualities can be defined according to the dynamic demands of the instantiated systems. They also permit building or modifying instantiated systems in hours/minutes instead of weeks/days [5].

In this paper we discuss a novel architecture for controlling virtualised environments to reduce the time required to perform corrective or adaptive changes from hours/minutes to seconds/milliseconds. It places its operations alongside the control plane, with which the present architecture cooperates to quickly adjust allocated resources in prevention of, or response to, events (incidents) and/or new requirements. Moreover, automating resource control allows such systems to scale up while lowering both CAPEX and OPEX and improving the overall system efficiency and reliability.
The main hurdle that limits the responsiveness of current virtualised environments is that the allocation and deallocation operations required by the instantiated system have to be ordered by human administrators. In the current ecosystem, this limitation has to be overcome, so that such systems are able to quickly address new requirements or react to incidents, such as natural disasters. The problem is aggravated by the increasing size and complexity of networks, spearheaded by the Internet of Things (IoT) [14]. In such circumstances, administrators have to manage individual elements by aggregating them to perform common operations [8], without considering the particular situation and state of each element.

To achieve its goal, the architecture discussed in this paper uses the semantic description of elements, their relations, and observations (events), together with Complex Event Processing (CEP) mechanisms, with the objective of correlating them to identify anomalous or undesired situations of the controlled system. It then takes advantage of the elastic capabilities offered by virtualisation infrastructures to enforce the required proactive and/or reactive measures to overcome such situations. This allows network administrators to abstract away the individual aspects of the elements they are in charge of, exploiting the global view of the system and establishing the rules and policies it must fulfil.

The remainder of this paper is organized as follows. First, we discuss the related work in Section 2. Then, we describe our approach in Section 3 and particularize it with a potential use case in Section 4. Finally, in Section 5 we conclude the paper and introduce some aspects of our next steps.

2. Related Work

The evolution towards virtualised infrastructures provides enormous benefits to the flexibility and reliability of networked systems, which are already being exploited, both functionally and economically, by Cloud Computing technologies [5]. New mechanisms are therefore available to resolve common problems and improve the operation of networked systems, accompanied by several research challenges [7]. In response, some architectures provide new layering methods to differentiate infrastructure providers (lower layers) from virtual service providers (higher layers) [18], so that network operations are improved, which is also the main topic of this paper.

We find those properties in some widespread solutions, such as OpenFlow in the network space [19], together with OpenStack and OpenNebula in the virtual computing space [24]. These solutions have been enlarged into the broader Network Function Virtualisation (NFV) [15], which abstracts functions from specific hardware or locations. However, the increasing complexity of those mechanisms still exposes several management and control challenges [22]. This emphasizes the necessity of new management and control models, within which the autonomic aspects are highly valued, as supported by the MANA reference management model [11], which also highlights the establishment of a Knowledge Base (KB) alongside other management functions. Some early proposals approached this by introducing a common service bus through which all elements of the network would be controlled [16]. However, they are largely held back by the limitations of classic networks.

Targeting problems similar to those envisioned by this paper, we also find an approach that adapts network behavior to user and service requirements within an identity-based architecture [17]. It defines a new plane that allows communication among the management and control planes to provide a holistic view of the network and thus support new network services. Moreover, we find a proposal that provides self-adaptation functions to the Recursive InterNetwork Architecture (RINA) [3], including knowledge management and an autonomous model to analyze and classify network traffic. However, these proposals cannot be directly applied to current networks and they do not consider non-network resources, such as computing elements and things/sensors. Nevertheless, they can interoperate with the architecture discussed in this paper to achieve autonomic control in some specific scenarios.

Finally, we find some approaches that leverage Artificial Intelligence (AI) methods, such as applying Machine Learning (ML) techniques, to achieve resource selection and adaptation [20]. They have the advantage of finding patterns that humans cannot, delivering more efficient and effective allocations. However, it is hard and time consuming for them to adapt to large changes in the controlled system, and they cannot easily deal with an arbitrary number of parameters. Nevertheless, such approaches can coexist and also interoperate with the architecture discussed in this paper.

3. Automation of Complex Operations

In this section we discuss how our architecture automates complex control and management operations to reduce the time required for resource adaptation from hours/minutes to seconds/milliseconds. Instead of being decoupled from human administrators, our architecture relies on them to specify the control objectives as well as the actions that can be taken if the controlled resources do not behave as expected. It extends the operation model defined by Autonomic Computing (AC) [12] with network control functions, thus adopting the closed-loop model defined by MANA in the form of connected subprocesses.

As shown in Figure 1, our architecture includes a subprocess to implement each activity defined in MANA: a collector that receives observations and events, an analyzer that processes and correlates events to identify specific situations, a decider that determines the actions to take (if needed), and an enforcer that reflects the decisions onto the environment and resources. Administrators specify the behavior of the control system in the form of statements that correlate formal situations (complex events) with perfective or corrective actions. Moreover, the closed control loop is achieved by continuously checking that resources evolve as expected by those statements, even after applying the specified actions, while semantic coherence in all elements and subprocess communications is achieved by using a common ontology.

Figure 1: Overview of the components and interactions of the architecture discussed in this paper.
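To make the closed loop concrete, the following minimal Python sketch chains the four subprocesses named above through simple queues. The handler names, the message fields, and the single overload rule are hypothetical placeholders introduced for illustration; they are not the actual implementation of the architecture.

```python
# Minimal sketch of the MANA-style closed control loop
# (collector -> analyzer -> decider -> enforcer).
# All names, fields, and thresholds are illustrative placeholders.
from queue import Queue

observations = Queue()   # raw observations from resources and sensors
situations = Queue()     # situations identified by the analyzer
actions = Queue()        # actions selected by the decider

def collector(raw):
    """Translate a raw observation into a common-ontology-like item."""
    item = {"resource": raw["id"], "metric": raw["metric"], "value": raw["value"]}
    observations.put(item)

def analyzer():
    """Correlate observations and emit situations (e.g. an overload)."""
    item = observations.get()
    if item["metric"] == "cpu_load" and item["value"] > 0.9:  # hypothetical rule
        situations.put({"type": "Overload", "resource": item["resource"]})

def decider():
    """Match a situation to a corrective or perfective action."""
    situation = situations.get()
    if situation["type"] == "Overload":
        actions.put({"op": "scale_out", "resource": situation["resource"]})

def enforcer(apply_to_infrastructure):
    """Apply the action and feed its postcondition back to the collector."""
    action = actions.get()
    apply_to_infrastructure(action)
    collector({"id": action["resource"], "metric": "action_applied",
               "value": action["op"]})
```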
3.1 Knowledge Management

Apart from the raw inputs and outputs of the control engine, all information pieces managed by our architecture are constructed by following a common ontology, so semantic coherence is ensured throughout the whole control process. It exploits both the Resource Description Framework (RDF) [13] and the Web Ontology Language (OWL) [21], which are widely known standards for the description, acquisition, and management of knowledge. The resulting ontology includes the necessary concepts to represent observations of any kind (dynamic knowledge) as well as the specific details (descriptions) and relations (topologies) of the elements involved in the control operation (static knowledge), and it can be extended to represent any new situation, element, observation, etc.

In the RDF data model, a set of three entities, called a triple, is the smallest unit of knowledge (atom). A triple, as its name indicates, has three entities: a subject, a predicate, and an object. The simplest understanding of a single atom is that a subject states a predicate about an object. Complex knowledge statements can be formulated by combining an arbitrary number of such atoms and, as described throughout this paper, they can be correlated to find out (or infer) situations that are not directly expressed by them. In our architecture, subjects and objects can be transient in the case of dynamic knowledge, or persistent in the case of static knowledge. Moreover, knowledge communication and processing are simplified by using Turtle [4] for knowledge serialization.

All static knowledge is stored in the Knowledge Base (KB), including topologies, available resources, and executable actions. Despite its name, the static knowledge is updated to reflect structural changes in the system. Updates are achieved through actions triggered by detected events (e.g. detecting that a new link has been added to the system or that administrators have added more resources to the underlying pool). We opted to implement the KB with Apache Jena Fuseki [10]. It is a well-known and widespread RDF data store that offers an HTTP/REST interface and supports SPARQL [23] to query and update information. The performance and scalability qualities provided by Fuseki are sufficient for instances that control either individual or collective resources. However, for controlling really large infrastructures (Internet scale), whose knowledge size and fragility surpass the limits of a single storage node, it would be necessary to implement a highly distributed system, such as RDF peers [6].
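As a rough illustration of how dynamic knowledge could reach such a KB, the sketch below writes one observation to a Fuseki server with a SPARQL update and reads it back with a SELECT query. The dataset name (kb), the obs:/res: vocabulary, and the literal values are assumptions made for the example; only the generic Fuseki HTTP/SPARQL interface is standard.

```python
# Sketch: store one observation in Apache Jena Fuseki over HTTP and query it
# back with SPARQL. The dataset name ("kb") and the obs:/res: vocabulary are
# hypothetical; they stand in for the architecture's common ontology.
import requests

FUSEKI = "http://localhost:3030/kb"   # assumed dataset endpoint

insert = """
PREFIX obs: <http://example.org/observation#>
PREFIX res: <http://example.org/resource#>
INSERT DATA {
  res:vm-01 obs:cpuLoad 0.93 ;
            obs:observedAt "2016-09-01T12:00:00Z" .
}
"""
requests.post(f"{FUSEKI}/update",
              data=insert,
              headers={"Content-Type": "application/sparql-update"}
              ).raise_for_status()

select = """
PREFIX obs: <http://example.org/observation#>
SELECT ?resource ?load
WHERE { ?resource obs:cpuLoad ?load . FILTER (?load > 0.9) }
"""
rows = requests.get(f"{FUSEKI}/query",
                    params={"query": select},
                    headers={"Accept": "application/sparql-results+json"}).json()
for binding in rows["results"]["bindings"]:
    print(binding["resource"]["value"], binding["load"]["value"])
```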
3.2 Components and Subprocesses

The architecture discussed in this paper, as shown in Figure 1, is organized into five components: the core, the collector, the analyzer, the decider, and the enforcer. Each component incorporates its own subprocess and concentrates on its own task, as discussed below. Above all, the core of the engine (dashed box in the figure) encompasses and orchestrates the other four elements, managing their message exchanges, as detailed in the workflow overview included below.

3.2.1 Observations and Event Collection

Every element of the controlled system, regardless of whether it is a managed resource or not, can be configured to generate observations that are then addressed to the control engine through its event collection interface. They range from the CPU load of a computing instance to the measurement of a sensor. The collector subprocess is in charge of receiving the observations, transforming them according to the common ontology, and enqueuing them to the analyzer subprocess.

3.2.2 Event Correlation and Analysis

Once observations are translated to RDF triples, they are introduced into a Complex Event Processor (CEP), which is configured with a set of precise statements that correlate the knowledge atoms and find behavioural patterns that reveal specific situations. We chose Esper [9] to implement the CEP. It provides a high-performance engine with a permissive, open-source licence. Moreover, it has a vast community of users and integrators, and it is backed by a reputed company with plenty of experience supporting the integration of Esper within many software projects.
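The actual Esper EPL statements are not reproduced in this paper, so the sketch below only illustrates, in plain Python, the kind of pattern such a correlation statement would encode: a per-resource sliding window of CPU-load observations whose average crossing a threshold reveals an overload situation. The window size and threshold are arbitrary example values.

```python
# Illustration of the kind of correlation an analyzer statement expresses:
# a per-resource sliding window over CPU-load observations whose average
# crossing a threshold reveals an "Overload" situation. Window size and
# threshold are arbitrary example values, not taken from the paper.
from collections import defaultdict, deque

WINDOW = 5          # number of recent observations considered
THRESHOLD = 0.85    # average load that reveals an overload

windows = defaultdict(lambda: deque(maxlen=WINDOW))

def correlate(observation):
    """Return a situation dict when the pattern is detected, else None."""
    if observation["metric"] != "cpu_load":
        return None
    window = windows[observation["resource"]]
    window.append(observation["value"])
    if len(window) == WINDOW and sum(window) / WINDOW > THRESHOLD:
        return {"type": "Overload", "resource": observation["resource"]}
    return None
```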
3.2.3 Decision and Knowledge Update

Unless the response from the event correlation has been forwarded to another stage of the control cycle, once the processing of the streamed observations has detected some situation, the decision subprocess enters the scene by matching it to the possible actions required to overcome it. Such actions are related to functions provisioned alongside the core of our architecture, which can be either attached to the enforcement process described below or independent, so that they can feed other stages of the control cycle, such as the event correlation or the decision subprocess itself. Moreover, since all but the streamed semantic statements are stored in the KB, this subprocess is responsible for querying and updating it.

3.2.4 Action Enforcement

As actions exit the decision subprocess, they are qualified with the set of specific elements involved and the changes required from them. This set is not limited to the resources that triggered the actions, so some elements can be affected by events in which they are not involved. Then, the enforcer subprocess translates such changes to the interfaces offered by the target elements, applying the changes to effectively overcome the particular situation detected during the event processing stage. Moreover, the changes are fed back to the collector subprocess, so the whole system is aware of the changes requested to the environment, therefore closing the control loop.

The enforcer translates the semantic actions into specific messages or proper API calls, as required by the underlying resource controller. Its translation mechanisms are modular, so new interfaces can be incorporated into the system as required by specific deployments, but all of them must "understand" the common ontology. Moreover, the action postconditions are sent back to the collector, so they are considered together with new observations and the effects of the control cycle can be verified during the next cycles. Finally, the core might send the actions from the decider to the analyzer, or to the decider itself, if this is specified in the content of the actions with a special clause used to indicate the target of the action.
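One way to picture the modular translation mechanisms is a registry that maps action types to controller-specific adapters. The sketch below is only an outline under that assumption; the adapter names, the action fields, and the printed requests are invented for the example and do not correspond to any specific controller API.

```python
# Sketch of a modular enforcer: semantic actions are dispatched to
# controller-specific adapters that turn them into concrete API calls.
# Adapter names and action fields are illustrative only.
ADAPTERS = {}

def adapter(action_type):
    """Register a translation function for one type of semantic action."""
    def register(func):
        ADAPTERS[action_type] = func
        return func
    return register

@adapter("scale_out")
def scale_out(action):
    # A real adapter would call the virtual-computing controller here
    # to add instances for the overloaded resource.
    print(f"request one more instance for {action['resource']}")

@adapter("replace_flow")
def replace_flow(action):
    # A real adapter would ask the network controller to install new
    # flows replacing the ones lost with a broken physical link.
    print(f"install replacement flow for {action['link']}")

def enforce(action):
    """Translate and apply a semantic action, then report its postcondition."""
    ADAPTERS[action["op"]](action)
    return {"resource": action.get("resource"), "status": "requested"}
```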
3.3 Component Workflow Overview

The workflow of our architecture, as shown in Figure 1, begins when some observations are received through its input interfaces. The core enqueues them while the collector processes the queues to transform them into semantic data guided by the common ontology, using the translator corresponding to the source of such observations. Then, those semantic items are sent to the analyzer. It has been configured with a set of statements that correlate observations in order to find some aspect of the managed system, such as detecting that the load of a resource is overpassing some threshold. When some statement is triggered by the analyzer, its result is returned to the core, which checks it to find out the next step.

Some analysis statements may require that their results be re-introduced into the analyzer for further consideration. Otherwise, and most typically, the core will enqueue them to be processed by the decider. This also has a set of statements that receive the situations identified by the analyzer (events) and determine the best actions to overcome them. This is performed by querying the KB, using SPARQL, to find out which semantic construction (action) has the mentioned situation as precondition and a stable situation as postcondition, which is one of the "goals" established by the administrators. Then, this action is sent back to the core to be enqueued for the enforcer, which communicates it to the infrastructure.
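The precondition/postcondition lookup can be pictured as a single SPARQL SELECT against the KB. In the sketch below, the ctrl: predicates and the StableSituation class are hypothetical stand-ins for the common ontology, and the Fuseki endpoint is the same assumed one as in the earlier example.

```python
# Sketch of the decider's KB lookup: find actions whose precondition matches
# the detected situation and whose postcondition is a stable state. The
# ctrl: vocabulary is a hypothetical stand-in for the common ontology.
import requests

FUSEKI_QUERY = "http://localhost:3030/kb/query"   # assumed endpoint

def actions_for(situation_iri):
    query = f"""
    PREFIX ctrl: <http://example.org/control#>
    SELECT ?action WHERE {{
      ?action ctrl:hasPrecondition  <{situation_iri}> ;
              ctrl:hasPostcondition ?post .
      ?post a ctrl:StableSituation .
    }}
    """
    response = requests.get(FUSEKI_QUERY,
                            params={"query": query},
                            headers={"Accept": "application/sparql-results+json"})
    response.raise_for_status()
    return [b["action"]["value"]
            for b in response.json()["results"]["bindings"]]

# Example: actions_for("http://example.org/situation#Overload-vm-01")
```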
3.4 Scalability

Long-term scalability is ensured by the high performance provided by the technologies involved in our architecture. They are able to process thousands of events in little time, which is essential to achieve fast resource adaptation. On the other hand, we designed the present architecture to process the observations as streams, meaning that they are not actually stored in any structure that can be overflowed by high-rate sources. However, if the acceptance rate is not properly adjusted in the collection subprocess, the control system can fall into congestion and its responses can lag noticeably. Knowing the acceptance rate limit is a key objective of our current research work. Nevertheless, our architecture supports the instantiation of multiple control layers to facilitate the control of huge and complex systems.

4. Target Use Case

As discussed throughout this paper, the main objective of our architecture is to achieve the fast allocation and deallocation (adaptation) of network and computing resources in virtualised environments. We have chosen to target an emergency use case to demonstrate the necessity of such functions because it is representative of the need for fast adaptation. It begins with a basic scenario where a big organization with several dependencies (buildings, sites) is setting up a computerized help desk system (CHD) to support its personnel in case an important incident (a big earthquake) occurs.

The system is deployed at the headquarters (HQ) of the organization, and some sensors (seismometers) are distributed among the sites of the organization and configured to report their records to the CHD. The operational behavior of the scenario differentiates four global situations:
• Normal operation: The CHD is working normally with very low traffic and requests.
• Incident: A big earthquake occurs and makes the seismometers record it and notify the CHD. The tremors have broken a physical link between the HQ and one of the other sites of the organization.
• Aftermath: Many people (hundreds/thousands) try to contact the CHD through its website and automated call center to notify their state and get solutions to their issues. The CHD and one of the remaining links get overloaded.
• Back to normality: After some time, the emergency situation is alleviated: the load of the CHD decreases so it can attend its requests normally. The broken link gets fixed, so the CHD regains the lost connectivity.

Current approaches cannot work in this scenario because they are unable to react to such kinds of events within the time required. Moreover, static approaches are difficult to adapt without involving physical elements (e.g. adding or removing computing machines). Furthermore, those approaches require resources to be assigned beforehand, which limits the efficiency of the architecture. Overallocation of resources does not ensure that the problem is avoided because, regardless of the quantity of resources, the system might suddenly need more. A proper solution would be to virtualise the network and computing infrastructure of the organization and deploy the CHD elastically, so it can live alongside other systems and be scaled up and down to adjust its capacity to dynamic requirements. However, this solution alone is not enough to deliver a fast response in such a delicate scenario, so our architecture enters the scene.

Figure 2: Target use case: element interactions in normal operation.

As depicted in Figure 2, the CHD is deployed on a virtual infrastructure using virtual computing resources that can be instantiated on demand and connected to a virtual network that is dedicated to this system. Moreover, the control engine is deployed alongside the virtual system, making the computing and network controllers notify it about resource consumption (load, message exchange rate, etc.) and configuring the seismometers (and maybe other sensors) to also report their observations to it.

Figure 3: Target use case: additional computing resources and a new path/flow allocated after the incident occurred.

When the incident (earthquake) occurs, as depicted in Figure 3, the notifications sent by the seismometers arrive at the control engine. The broken link also affects the system because it embeds one or more links of the virtual network. However, once the engine analyzes the observations from the seismometers, it anticipates a possible increase of load in the network and proactively requests an increase of the resources allocated to the CHD. Moreover, the engine detects the missing links of the virtual network and requests the underlying network controllers to establish new flows to replace them. The engine also detects that, even though it had requested an increase of computing resources, their load is reaching some threshold, so it requests the deployment of more resources in locations closer to the most demanding users. Once the environment returns to normality, the engine detects that some allocated resources are underused and requests the infrastructure controllers to dispose of them. However, resources are disposed of at a lower rate than they were allocated, to reduce the response time in case a flood of requests comes back suddenly.

Apart from reducing the response time, which is evident in the discussed use case and desirable for most network infrastructures, this scenario also justifies the need to consider both network and computing resources in the control and management process. Moreover, it demonstrates the enormous benefit for the management and control process obtained from the interaction with things, especially sensors. This allows the control engine to be aware of situations happening outside the digital environment that could affect it.

In summary, the main benefits of our architecture applied to the present scenario are to ensure the efficient use of resources, effectively reducing the CAPEX of the emergency system, while reducing the involvement of human administrators, so that OPEX is also reduced. Considering these benefits, the main stakeholders of our architecture would be organizations that have (or are planning) small data centers distributed among their sites, such as universities and other public organizations, which would benefit greatly from the virtualisation of their infrastructure and, thus, the efficient use of their resources. Moreover, infrastructure providers would also be interested in offering fine-grained control to their customers over the resources they consume.
5. Conclusions and Future Work
In this paper we have demonstrated how it is possible to achieve the fast, automatic adaptation of computing and network resources by using semantically-driven complex event processing, with the final target of raising both the reliability and efficiency of networked systems deployed on virtualised infrastructures. We have thus introduced a control architecture that separates event processing and semantic matching, so each element is specialized in a different task. This allows the architecture to perform such adaptation operations within a very restricted time frame, which supports our performance objectives.

For the next step of our research, as mentioned throughout this paper, we plan to investigate how these results can be applied to resolve the whole picture discussed in Section 4: reducing the time elapsed from the moment an incident takes place until the virtualised system and its related resources are actually adapted. This means finding and resolving other issues on the way to building a production-oriented control system, such as how to place the resource control engine alongside other control plane mechanisms, such as SDN and NFV platform controllers, in order to ensure that its performance is retained and reflected in the target systems. Finally, we will delve into the mechanism used by our architecture to give control feedback to human administrators.

References
[1] T. Anderson, L. Peterson, S. Shenker, and J. Turner. Overcoming the Internet impasse through virtualization. Computer, 38(4):34–41, 2005.
[2] D. Barney. The Network Society. Polity Press Ltd., Cambridge, UK, 2004.
[3] J. Barron, M. Crotty, E. Elahi, R. Riggio, D. R. Lopez, and M. P. de Leon. Towards self-adaptive network management for a recursive network architecture. In Proceedings of the 2016 IEEE/IFIP Network Operations and Management Symposium, pages 1143–1148, Washington, DC, USA, 2016. IEEE.
[4] David Beckett, Tim Berners-Lee, Eric Prud'hommeaux, and Gavin Carothers. RDF 1.1 Turtle - Terse RDF Triple Language, 2014. http://www.w3.org/TR/turtle/.
[5] Rajkumar Buyya, Chee Shin Yeo, and Srikumar Venugopal. Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities. In Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications, pages 5–13, Washington, DC, USA, 2008. IEEE Computer Society.
[6] Min Cai, Martin Frank, Baoshi Yan, and Robert MacGregor. A subscribable peer-to-peer RDF repository for distributed metadata management. Web Semantics: Science, Services and Agents on the World Wide Web, 2(2):109–130, 2004.
[7] N. M. M. K. Chowdhury and R. Boutaba. Network virtualization: State of the art and research challenges. IEEE Communications Magazine, 47(7):20–26, 2009.
[8] Andrew R. Curtis, Jeffrey C. Mogul, Jean Tourrilhes, Praveen Yalagandula, Puneet Sharma, and Sujata Banerjee. DevoFlow: Scaling flow management for high-performance networks. SIGCOMM Computer Communications Review, 41(4):254–265, 2011.
[9] EsperTech Inc. EsperTech - Event Series Intelligence, 2016. http://espertech.com.
[10] The Apache Software Foundation. Apache Jena Fuseki, 2016. https://jena.apache.org/documentation/fuseki2/index.html.
[11] Alex Galis et al. Position Paper on Management and Service-aware Networking Architectures (MANA) for Future Internet. http://www.future-internet.eu/fileadmin/documents/prague_documents/MANA_PositionPaper-Final.pdf, 2010.
[12] Jeffrey O. Kephart and David M. Chess. The vision of autonomic computing. IEEE Computer, 36(1):41–50, 2003.
[13] Graham Klyne and Jeremy J. Carroll. Resource Description Framework (RDF): Concepts and Abstract Syntax, 2004. http://www.w3.org/TR/rdf-concepts/.
[14] Gerd Kortuem, Fahim Kawsar, Vasughi Sundramoorthy, and Daniel Fitton. Smart objects as building blocks for the Internet of Things. IEEE Internet Computing, 14(1):44–51, 2010.
[15] D. R. Lopez. Network functions virtualization: Beyond carrier-grade clouds. In Proceedings of the 2014 Optical Fiber Communications Conference and Exhibition (OFC), pages 1–18, Washington, DC, USA, 2014. IEEE Computer Society.
[16] Pedro Martinez-Julia, Diego R. Lopez, and Antonio F. Gomez-Skarmeta. The GEMBus framework and its autonomic computing services. In Proceedings of the International Symposium on Applications and the Internet Workshops, pages 285–288, Washington, DC, USA, 2010. IEEE Computer Society.
[17] Pedro Martinez-Julia and Antonio F. Skarmeta. Using an identity plane for adapting network behavior to user and service requirements. In Mobile Networks and Management, volume 158 of Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, pages 253–265. Springer International Publishing, 2015.
[18] Pedro Martinez-Julia, Antonio F. Skarmeta, and Alex Galis. Towards a secure network virtualization architecture for the future internet. In The Future Internet, volume 7858 of Lecture Notes in Computer Science, pages 141–152. Springer Berlin Heidelberg, 2013.
[19] Nick McKeown, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson, Jennifer Rexford, Scott Shenker, and Jonathan Turner. OpenFlow: Enabling innovation in campus networks. SIGCOMM Computer Communications Review, 38(2):69–74, 2008.
[20] Takaya Miyazawa and Hiroaki Harai. Supervised learning based automatic adaptation of virtualized resource selection policy. In Proceedings of IEEE Networks 2016, page N/A, Montreal, QC, Canada, September 2016. IEEE Communications Society.
[21] Boris Motik, Peter F. Patel-Schneider, and Bijan Parsia. OWL 2 Web Ontology Language - Structural Specification and Functional-Style Syntax (Second Edition), 2012. http://www.w3.org/TR/owl2-syntax/.
[22] A. Pras, J. Schonwalder, M. Burgess, O. Festor, G. M. Perez, R. Stadler, and B. Stiller. Key research challenges in network management. IEEE Communications Magazine, 45(10):104–110, 2007.
[23] Eric Prud'hommeaux and Andy Seaborne. SPARQL Query Language for RDF, 2008. http://www.w3.org/TR/rdf-sparql-query/.
[24] Xiaolong Wen, Genqiang Gu, Qingchun Li, Yun Gao, and Xuejie Zhang. Comparison of open-source cloud management platforms: OpenStack and OpenNebula. In Proceedings of the 9th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD), pages 2457–2461, Washington, DC, USA, 2012. IEEE Computer Society.