A Meta-model for QoS Monitoring in a Dynamic Service-Component Platform

Fábio N. Souza, Tarcisio C. Silva, David J. M. Cavalcanti, Nelson S. Rosa and Ricardo M. F. Lima
Centre of Informatics, Federal University of Pernambuco, Pernambuco, Brazil
Email: {fns, tcs5, djmc, nsr, rmfl}@cin.ufpe.br

Abstract—Quality attributes play a very relevant role in the service-oriented computing world, as they allow distinguishing between functionally equivalent services. In fact, these attributes impact various activities related to the life-cycle of service-based applications (SBAs), starting from service discovery and permeating other activities such as service level agreement establishment and monitoring. Considering their relevance, it is essential that these attributes are precisely defined. Moreover, as quality attributes are inherently dynamic, they must be continuously monitored. In this context, this paper proposes a meta-model that formally defines and connects the service, quality and event domains. From one perspective, the connection between the service and quality domains enables the design of models that formally specify applications' functional and non-functional requirements. From another perspective, connections between the quality and event domains enable quality engineers to define how relevant quality attributes and metrics can be computed based on a set of runtime events. Conformant models can be interpreted at runtime by compatible platforms, allowing them to dynamically (re)configure their monitoring mechanisms.

Keywords—Monitoring, Meta-models, Service Oriented Computing, QoS, Event Processing, DSL

I. INTRODUCTION

Service-orientation enables the development of flexible and adaptable applications through the use of a dynamic interaction model. According to this model, providers publish their services on a registry, where service-based applications can dynamically find and bind to them. Together, providers, consumers and registry define the traditional "triangle" that is central to the Service-Oriented Architecture (SOA). Despite the flexibility embedded in the model, designing adaptable applications is not easy. Firstly, an application should use a registry to find the required services. For each one, multiple candidates can be found, so the application should embed some selection logic. It is also possible that no candidates are found. To handle that, the application can search another registry or wait until a candidate appears. Secondly, an application has no control over the availability of the services that it consumes. In a distributed environment, these services present dynamic availability, meaning that they can become available or unavailable at any time. To deal with dynamic availability, service-based applications should implement a continuous monitoring mechanism. Availability is just an example: depending on the application's nature, different quality attributes should be considered.

Dealing with all these aspects in the application code increases complexity, promotes code replication and is not aligned with the separation-of-concerns principle. In line with these ideas, different projects combine service- and component-orientation concepts [1][2][3][4] and propose the use of a service-oriented component model. According to this model, a service-based application is managed by an application container, which is responsible for supporting the application execution, solving its dependencies and managing its life-cycle.

Nowadays, containers are usually limited in their monitoring capabilities. Typically, they offer a predefined set of quality attributes, which are often insufficient to reflect applications' needs. Furthermore, the set of metrics by which these attributes can be assessed is also predefined, limiting the expressiveness of the quality model supported by these platforms.
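The find/select/bind logic discussed in this introduction can be sketched as follows. The registry API (`lookup`), the candidate structure, and the latency-based selection policy are illustrative assumptions, not part of any particular platform:

```python
import time

def find_and_bind(registries, service_name, retry_interval=5.0, max_retries=3):
    """Search registries for candidate providers of `service_name`,
    apply a selection policy, and fall back to waiting when none is found.
    The registry API and latency-based policy are hypothetical."""
    for _ in range(max_retries):
        for registry in registries:
            candidates = registry.lookup(service_name)
            if candidates:
                # selection logic: prefer the candidate with the lowest
                # advertised average response time (one possible policy)
                return min(candidates, key=lambda c: c["avg_response_ms"])
        time.sleep(retry_interval)  # no candidate yet: wait until one appears
    raise LookupError(f"no provider found for {service_name!r}")
```

Embedding such logic in every application is exactly the kind of replication that the container-based approach described below aims to avoid.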

Ideally, besides resolving application dependencies at deployment time, an application container should be responsible for handling the dynamic aspects inherent to SOA environments. In particular, a container should monitor the services consumed by applications and adapt them dynamically when the quality offered by those services does not fulfill their requirements. In this scenario, two important questions arise: how can an application precisely define its required quality attributes, and how can these attributes be monitored?

In this context, this paper proposes a meta-model (referred to as the Dynamic Service-Oriented Architecture (DSOA) meta-model) that defines and connects the service, quality and event domains. These elements are the core elements that compose the abstract syntax of a modeling language used by application developers to specify provided and required services, as well as the expected quality-related characteristics. The idea is to use this language to design platform-independent application models, which are interpreted at runtime (a.k.a. models@runtime) in order to dynamically configure an application container and its monitoring mechanism. It is important to remark that the relevance of the proposed meta-model is not the representation of those individual domains; in fact, analyzed separately, they are reasonably similar to other proposals. What is significant in our proposal is that our meta-model explicitly represents the connections between those domains, enabling application developers to specify provided and required services, expected quality attributes and metrics, and, especially, how these metrics can be precisely defined and computed based on a set of events thrown during application execution.

To validate these ideas we developed the DSOA Platform, a reference implementation which dynamically interprets conformant models and (re)configures its monitoring mechanism at runtime. This paper is organized as follows. Section II presents the DSOA meta-model, which connects the service, quality, and event domains. Section III describes the DSOA platform's monitoring mechanism, which is evaluated in Section IV. Section V compares our research with other relevant research projects in the field. Finally, we conclude the paper and present some future research directions in Section VI.

II. DSOA META-MODEL

Model-driven development is an important trend in the software engineering field, as it helps developers to focus on business concerns instead of on technology-related aspects. The core idea is to center the development tasks on the production of a set of technology (platform)-independent models that represent different concerns of the software under construction. At later stages of the software development process, these models are usually involved in transformation processes that aim at generating other models or even application code. Therefore, this approach leads to a natural separation of concerns, isolating the business-related code from other orthogonal aspects, such as security and transactions.

In consonance with these ideas, we propose that developers design applications through a collection of high-level, platform-independent models, which the SOA infrastructure dynamically transforms into low-level models specifying required and provided services, quality attributes, and the runtime events and processing agents that are used to compute quality-related metrics. To populate those models, we use external Domain-Specific Languages (DSLs) whose abstract syntaxes are formally defined by our meta-model. In fact, the core concepts represented in this meta-model can be grouped into a set of domains, each one representing a different perspective. These perspectives and their relationships are detailed in this section.

A. DSOA Metamodel: A Multidimensional Perspective

To design the meta-model, we must first identify the domains related to the design of QoS-aware service-based applications. These domains should allow us to specify which services should be monitored, which quality attributes are relevant, and how different quality metrics can be computed, without adding any platform-specific code. In fact, several research efforts in the QoS-aware service composition area propose answers to those questions.
However, they usually consider only the service and quality domains, each in isolation. Despite the relevance of these domains and of the related research projects, just modeling services and quality attributes is not enough, since it does not establish precisely how the quality attributes and corresponding metrics can be computed. As shown in Figure 1, an objective answer to this question can be given by introducing the event domain and correlating it with the service and quality domains. To effectively understand the basic idea behind that figure, it is necessary to consider it under different perspectives.

Fig. 1: DSOA Metamodel: A Multidimensional Perspective

Each perspective focuses on a different aspect and tries to provide answers to some of the questions that are relevant in the context of managing QoS-aware service-based applications. The first perspective focuses on the plane connecting the service and quality domains. This plane is used to represent the quality attributes required and/or provided by the corresponding services. In this context, when a service provider develops a model highlighting points in this plane, he is indicating which quality attributes are relevant to the service that he is providing. Correspondingly, an application developer is supposed to produce a model describing the services that the application requires and the quality attributes expected from those services. Besides indicating those associations, our models bind them to values in order to compose quality documents usually referred to as Service Quality Descriptions (SQDs) [5].

An essential point here is to observe that there is no indication concerning how those attributes shall be measured or computed. In other words, just connecting services and quality attributes is not enough: this connection does not provide any information to the supporting platform concerning how it can monitor those attributes. To fill this gap, we propose the introduction of the event domain. The point here is that the metrics related to the quality attributes shall be computed based on the set of events generated by the platform's sensors when they detect interactions with the services that are monitored.

The introduction of the event axis defines two new planes that are relevant to the design and execution support of QoS-aware applications. The first plane is defined by the service and event axes and indicates which events are generated by the supporting platform when it detects interactions with its supported services.
Moreover, as will be shown in the section that describes the event domain, this allows the definition of event processing agents which generate derived events that are also associated with these services. The other plane connects the event and quality axes. This plane represents the QoS event mappers, which indicate the events that are used by the platform to compute the value of a quality metric. In this context, we say that mappers assign semantics to the events that are thrown at runtime.

In summary, the main contribution of the DSOA meta-model is to make a clear connection between the service, quality and event domains, which represent the core concepts involved in the definition of QoS-aware service-based applications. In particular, from the monitoring mechanism's perspective, the proposed meta-model should enable developers to define conformant models that provide answers to three basic questions: which services should be monitored, which quality attributes are relevant, and how different quality metrics can be computed. The remainder of this section details each perspective of the meta-model, highlighting the key elements.

B. Service Domain

The first part of the DSOA meta-model (shown in Figure 2) represents the service domain. It enables developers to specify the services that are provided and/or required by an application. According to this perspective, each RequiredService or ProvidedService has a ServiceSpecification that comprises a FunctionalSpecification (defining the capabilities that are offered or required) and a NonFunctionalSpecification (defining the corresponding quality level by connecting the service and quality domains, and establishing constraints on the quality metrics). From a monitoring perspective, these specifications determine which services and operations should be monitored, and which metrics are relevant. To make a ProvidedService accessible to other applications, it should be published through one or more Endpoints, which can eventually be connected to RequiredServices through the definition of Bindings.

C. Quality Domain

Another relevant view of the meta-model describes the domain of quality attributes and metrics (Figure 3).
As can be observed, this perspective of the meta-model is quite generic and allows the definition of new quality attributes (at the base level) simply by creating new instances of the QoSAttribute class (at the meta level). Quality attributes are hierarchically organized via the concept of QoSCategory, which acts as a grouping element.

Metric is another relevant concept depicted in the meta-model. A metric represents a distinct point of view through which an attribute can be evaluated. Every metric has a distinct metric property that represents its value. Besides that, a metric can also have a collection of metadata properties describing the circumstances under which that metric was obtained. It is also important to observe that the quality domain itself does not describe how the metrics are computed. In fact, the metric "computation algorithm" is defined in another perspective. This dissociation enables us to dynamically replace the metric computation algorithm without interrupting running applications.

Fig. 3: DSOA Quality Domain

D. Event Domain

The last fragment of our meta-model is presented in Figure 4 and comprises the concepts related to event processing. These concepts are essential in order to define the different metric computation algorithms. In this perspective, a central element is the Event class, which is designed to be able to represent any event happening at runtime. The introduction of this class at the meta level enables DSOA to support a dynamically extensible collection of event types. In fact, a new event type (at the base level) is represented by a new instance of the Event class (at the meta level). This approach eliminates the need to define new classes to represent different types of events on the platform, increasing its flexibility.

Each Event has two collections of EventProperties, referred to as data and meta-data. The data collection comprises the properties that describe the event itself. Since these properties should give detailed information concerning what effectively occurred, the set of properties defined here varies largely according to the event type under consideration. The meta-data collection, on the other hand, is used to describe the circumstances under which the event occurred, usually including, for example, an identification number and a timestamp.

The event domain also specifies a single-inheritance model defining that an event type can have an associated super type. This inheritance model, besides simplifying the event definition by eliminating eventual redundancies in the property definitions, enables designers to explicitly represent event generalization and specialization relationships [6]. These relationships, in turn, enable consumers to receive more than one type of event through a single subscription, provided these types have a common super type.

Another fundamental element depicted in the event domain is the EventProcessingAgent. Conceptually, these agents represent event transformations and are defined by an input terminal, an output terminal, and a series of DerivationExpressions.
From the agent's point of view, an input terminal defines a stream of incoming events to which it can attach a collection of user-defined filters, used to exclude undesired events. The events received through the input terminal can be used to derive new events. The derived events are forwarded through the output terminal, composing a stream of derived events whose properties are computed by applying the agent's DerivationExpressions. Since these expressions define the agent's processing logic, they represent the core of the agent definition.
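As a rough illustration of these concepts, the sketch below represents event types as instances of a single generic Event class (rather than as new classes), records the single-inheritance relation in a super-type table, and implements an agent as filters plus a derivation expression. All names and structures here are our own illustrative assumptions about how such a meta-model could be realized, not the platform's actual code:

```python
import time
import uuid

# Single-inheritance table: event type -> super type (illustrative)
SUPER_TYPES = {"InvocationEvent": "PlatformEvent"}

class Event:
    """A generic event: new event *types* are new instances, not new classes."""
    def __init__(self, type_name, data):
        self.type_name = type_name
        self.data = dict(data)  # what effectively occurred
        # meta-data: circumstances of the occurrence
        self.metadata = {"id": str(uuid.uuid4()), "timestamp": time.time()}

def is_a(event, type_name):
    """Walk the super-type chain, so one subscription can match subtypes."""
    t = event.type_name
    while t is not None:
        if t == type_name:
            return True
        t = SUPER_TYPES.get(t)
    return False

class EventProcessingAgent:
    """Input terminal (filters) -> derivation expression -> output terminal."""
    def __init__(self, filters, derive):
        self.filters = filters  # user-defined predicates over incoming events
        self.derive = derive    # derivation expression

    def process(self, event):
        if all(f(event) for f in self.filters):
            return self.derive(event)  # derived event for the output stream
        return None

# An agent deriving a response-time event from an invocation event:
agent = EventProcessingAgent(
    filters=[lambda e: is_a(e, "PlatformEvent")],
    derive=lambda e: Event("ResponseTimeEvent",
                           {"value": e.data["response_ts"] - e.data["request_ts"]}))
```

Note how the filter matches the super type, so the same subscription would also receive any future subtype of PlatformEvent.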

Fig. 2: DSOA Service Domain

The agents compute quality metrics and generate corresponding events, which are propagated through their output terminals. The relationship between events and metrics is represented in the DSOA meta-model by the MetricEventMapper and MetricEventPropertyMapper classes, which act as mapping elements adding semantics to the events. Finally, our event domain shows the EventProducers, which should advertise the types of events that they publish, and the EventConsumers, which represent elements interested in receiving particular events. This interest is manifested through the definition of a Subscription specifying an event type and a collection of Filters.

III. MONITORING IN THE DSOA PLATFORM

In the previous section, we motivated the platform meta-model that defines and connects the service, quality and event domains. The connection between these domains at the meta level enables application developers to define models which provide answers to three fundamental monitoring questions: (1) which services should be monitored, (2) which quality attributes and metrics should be observed, and (3) how these metrics can be computed. This section complements the previous one, focusing on the description of our platform reference implementation from two complementary perspectives, structural and behavioral, which are jointly presented in Figure 5.

From the structural perspective, the figure shows the platform's monitoring-related components and the relationships between them. These components are usually responsible for producing, processing and consuming events, which represent relevant happenings in the application execution environment. As we will see while presenting the behavioral perspective, these events are used to compute quality-related metrics. Finally, it is important to mention that only the components that are directly involved in monitoring activities are represented in the figure, so other important components, such as our quality-aware service registry, are not represented here.

The behavioral perspective comprises three flows. The first one represents the platform configuration. The second one shows the data gathering process that monitors interactions between applications running on the DSOA platform and external services. The last flow represents the data processing, which computes quality metrics. The remainder of this section presents more details concerning these essential flows.

A. (Re)Configuration Flow

On the DSOA platform, the design of an SBA is based on the specification of a collection of models that are in conformance with the meta-model presented before. To specify those models, a developer uses a set of DSLs that are interpreted by the platform. In this scenario, the design of an SBA starts with the specification of a service model describing its functional and non-functional requirements. From the monitoring perspective, this model determines the services and operations that should be monitored and the metrics that should be computed. From the quality perspective, QoS experts design quality models specifying the quality attributes and metrics that will be used by the application developers during the specification of the applications' non-functional requirements. To compute metric values it is necessary to process a collection of events produced during the platform operation. To specify which events are relevant and how they should be processed, each application can define its own event model. Through this model, applications specify their own event processing networks1 [7][6], which are responsible for computing the quality metrics.

1 The description of the main elements of an event processing system, including event types, event processing agents, event producers, and event consumers.

Fig. 4: DSOA Event Domain

In this context, to indicate how the metrics can be computed, quality models must be connected to event models. This connection is realized through the application of the Mapper pattern [8], allowing quality and event models to remain independent from one another. In summary, the configuration of a quality-aware SBA involves the design of a collection of models (written using our DSLs) which are read by the platform to configure its internal elements. In fact, as Figure 5 illustrates, these models are processed by the Platform Configuration Service, which translates the DSL format into a semantic representation of the service, quality and event models. The service model is sent to the Application Containers in order to instruct their sensors to monitor specific services and operations. This degree of configuration enables us to minimize data collection and processing requirements. The Platform Configuration Service uses the quality model to configure the Metric Computing Service. Basically, this configuration step is responsible for defining new quality attributes and metrics, and incorporating these definitions into the platform catalogs. Finally, the event model is used to define new event types and agents in the Event Processing Service in order to customize the processing logic.

Fig. 5: DSOA Platform High Level View

Non-functional requirements are often dynamic; that is, they can change at any time, driven by changes in SLAs, demands from regulatory authorities, or customers' internal policies. To cope with this degree of dynamism, the Platform Configuration Service supports reading and interpreting service, quality and event models at runtime. This service is usually started when the platform is initialized and, from that moment on, it listens for configuration patches2.

2 Definitions of events, agents or quality models that are used to configure at runtime the platform’s components.
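As a concrete illustration, a configuration patch for a response-time metric might, once parsed, carry definitions like the ones below. The paper does not list the concrete DSL syntax, so this structure and all of its field names are hypothetical:

```python
# Hypothetical parsed form of a configuration patch; the field names and
# expression strings are illustrative assumptions, not the platform's real DSL.
patch = {
    "quality": {                      # quality model fragment
        "category": "Performance",    # QoSCategory grouping the attribute
        "attribute": "ResponseTime",  # QoSAttribute
        "metric": "AverageResponseTime",
    },
    "events": {                       # event model fragment
        "type": "ResponseTimeEvent",
        "super_type": "PlatformEvent",
        "agent": {                    # EventProcessingAgent definition
            "input": "InvocationEvent",
            "filter": "operation == 'getStockQuote'",
            "derivation": "response_ts - request_ts",
        },
    },
    "mapper": {                       # MetricEventMapper: binds event to metric
        "metric": "AverageResponseTime",
        "event_type": "AverageResponseTimeEvent",
        "value_property": "value",
    },
}
```

Keeping the quality and event fragments separate, joined only by the mapper entry, reflects the Mapper-pattern decoupling described above.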

B. Data Gathering Flow

The data gathering flow is centered around the Interceptor pattern. As shown in Figure 5, every access to/from an SBA is intercepted by the Application Containers, which mediate the interactions between the SBAs and the external world. Besides that, a container manages the SBAs' life-cycle, being responsible for discovering, selecting and binding required services; publishing provided services; and monitoring applications in order to ensure that the services that they provide and require are in accordance with the established SLAs.
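A minimal sketch of this interception step, assuming a simple synchronous proxy and a `publish` callback standing in for the event distribution mechanism, could look like this; the event layout and names are assumptions, while the off-thread publication mirrors the platform's strategy of keeping monitoring out of the request processing cycle:

```python
import threading
import time

class InvocationInterceptor:
    """Intercepts calls to a service proxy, times them, and publishes an
    InvocationEvent off the request thread (illustrative sketch)."""
    def __init__(self, target, publish):
        self.target = target    # the real (remote) service proxy
        self.publish = publish  # e.g. a handle to the event distribution service

    def invoke(self, operation, *args, **kwargs):
        request_ts = time.time()
        try:
            return getattr(self.target, operation)(*args, **kwargs)
        finally:
            event = {"type": "InvocationEvent",
                     "operation": operation,
                     "request_ts": request_ts,
                     "response_ts": time.time()}
            # Publish in a separate thread to keep the monitoring cost
            # out of the request processing cycle.
            threading.Thread(target=self.publish, args=(event,),
                             daemon=True).start()
```

The `finally` block ensures an event is emitted even when the invocation raises a business exception, which the simulator used in Section IV deliberately provokes.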

Usually, application containers have a collection of sensors responsible for generating events that represent relevant happenings occurring at runtime. On the DSOA Platform, a particularly relevant event type is the InvocationEvent, which notifies other platform components about interactions between managed SBAs and remote services. An InvocationEvent is an event type that is part of the platform's embedded event model (i.e., it is a primitive event) and is realized as an instance of the Event class defined in the event meta-model. It contains a set of data and metadata describing the service invocation including, for example, request and response timestamps, invocation parameters, service consumer identification, and so forth. Every InvocationEvent is published to the EventDistributionService in a separate thread in order to minimize the impact of the monitoring mechanism on the request processing cycle.

C. Data Processing Flow

As shown in Figure 5, the monitoring data processing flow starts when the application container publishes InvocationEvents on the platform (step 1) in order to allow other components to receive and process them. On the DSOA platform, the component responsible for event distribution is referred to as the Event Distribution Service (EDS). The EDS receives incoming events through its Pub/Sub API, acting as an event bus, and is responsible for keeping a loosely coupled connection between event producers and consumers. The events detected on the platform and published to the EDS are transformed into messages, which reify the event notifications. In fact, the event distribution service is built atop a message bus, which uses a collection of channels for routing and delivering the event notifications to the corresponding subscribers. To enable the integration with the messaging mechanism, our platform performs a transformation, translating the DSOA platform's event model into the model embedded in the messaging technology.

To be able to receive and process the events that flow on the platform (step 2), the Event Processing Service (EPS) subscribes itself to the EDS. The EPS is a central component of our monitoring infrastructure. In fact, inside the EPS, received events are processed by a collection of user-defined processing agents, which can produce new event types (derived events) that are published back to the EDS (step 3). In the context of quality-aware SBA development, these agents are used to compute quality metrics. The EPS is built atop a Complex Event Processing (CEP) engine and, besides that element, it comprises two important components: the events catalog and the agents catalog. These catalogs maintain the definitions of the event types and agents that are known by the DSOA platform and are directly involved in the platform's reconfiguration activities. In fact, by using DSOA's DSLs a developer can dynamically add new event types and agents in order to extend the platform capabilities. In particular, our platform was developed using Esper [9], and our agents are used to generate corresponding Event Processing Language (EPL) statements. As the platform and the CEP engine define their own event models, a transformation is again necessary. In fact, the transformation process is not restricted to event types: subscriptions and agent definitions must also be translated according to the chosen engine.

The final steps that compose the monitoring data processing involve the Metric Computing Service. This component subscribes itself to the EDS in order to receive the events that propagate the computed values. When it receives one of these events (step 4), it verifies in its metric catalog whether the event type corresponds to a valid metric. If this is the case, it assigns metric semantics to the event, effectively transforming it into metric data. That data is propagated to the other components through the EDS (step 5). Finally, the last step of the processing flow corresponds to the delivery of the metric data to metric handlers (step 6). In particular, Figure 5 presents the MonitoringCentral, which is the platform component that centralizes access to the monitored data.

IV. EXPERIMENTS AND RESULTS

Our experimentation scenario involves a Homebroker application that uses an Information Provider service to obtain stock quotes. This scenario was chosen because stock quotes are critical information and they can vary quickly in a dynamic market. In this context, if an Information Provider takes a long time to deliver the required stock quotes, the returned values cannot represent the current prices. Therefore, a Homebroker must monitor the Information Providers that it uses and ideally even be able to adapt itself in order to replace these Providers when they are not capable of providing the information with the required quality level. As we can see in this scenario, it is important that the Homebroker developer can tell the application container that he considers response time an important quality attribute. Moreover, he should be able to define the metrics that are relevant (e.g. average response time) and how these metrics can be computed based on the events that happen during application execution.

In order to impose some control over the proposed scenario, we developed a simulator that can be configured to represent services that provide different QoS characteristics. In particular, in our scenario, we configured a simulated Information Provider to present a cyclical behavior. During the first minute of operation it presents an exponentially distributed response time with a mean value of 500 ms. Thereafter, it becomes unavailable for about 10 s. Finally, the simulated service throws some business exceptions, but quickly normalizes itself and starts a new cycle. To validate our meta-model and the DSOA Platform, we designed two experiments. The first one addresses the relevance of our contribution and shows that being able to define how the quality metrics are computed represents an important aspect of QoS-aware application development. In particular, in our controlled experiment, we change the window size that is used in the average response time computation.
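To make the role of the window size concrete, a sliding-window average of response-time samples can be sketched as below (plain Python rather than the EPL statements the platform actually generates); the sample values are made up to show how window size trades smoothing against responsiveness:

```python
from collections import deque

def windowed_average(samples, window_size):
    """Return the running average of `samples` over a sliding window."""
    window = deque(maxlen=window_size)  # old samples fall out automatically
    averages = []
    for s in samples:
        window.append(s)
        averages.append(sum(window) / len(window))
    return averages

# A single 5000 ms outlier among 500 ms samples (invented values):
samples = [500, 500, 5000, 500, 500, 500]
small = windowed_average(samples, 2)  # small window: reacts sharply
large = windowed_average(samples, 6)  # large window: dampens the outlier
```

With the small window the outlier briefly doubles the reported average, while the large window spreads its effect thinly over many samples, which is exactly the smoothing-versus-responsiveness trade-off examined in the first experiment.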

Fig. 6: Platform Evaluation. (a) Exp. 1: Window's Impact; (b) Exp. 2: Memory Usage; (c) Exp. 2: System Load.

The second one evaluates our reference implementation in terms of performance and aims to show that the DSOA platform is able to support dynamic QoS metric definitions and corresponding computation algorithms without imposing severe performance penalties or large resource consumption. In other words, we intend to quantify the performance impact introduced by the DSOA platform's monitoring mechanism on the request processing cycle. To that end, we compare the resource consumption observed while the Homebroker application executes on the DSOA platform running 1000 event processing agents concurrently with the same application implemented atop the iPojo [1] service composition platform. In both experiments, we use a load runner to simulate 100 simultaneous clients, each of which sends a request and waits a random time between 0 and 1 second (think time) before sending the next one.

Figure 6a presents the results of the first experiment. It demonstrates the impact of the variation of the window size on the average response time computation. As expected, large windows minimize the impact of outliers and promote smoother results. On the other hand, they hinder the identification of quick variations. As shown, this single parameter, which can be defined and used differently by different applications (and even varied at runtime), has a significant impact on the service quality as perceived by these applications. Considering the relevance of this parameter to the computed values, a solution that eases this kind of configuration at runtime is an important differentiator.

The results of the second experiment are presented in Figures 6b and 6c. They give us an idea of the scalability of the DSOA platform's monitoring solution. More specifically, they compare memory consumption and the system load average while the application is running on a DSOA platform (with its metric computation agents processing many simultaneous events) with a similar application running on an iPojo platform (without metric computation capabilities). As we can see in the charts, even with 1000 concurrent processing agents running, the platform does not impose a high resource consumption when compared with an iPojo environment.

V. RELATED WORK

An important feature of quality attributes is their dynamic nature. This characteristic prevents system validation through the traditional approach of testing prior to execution. In fact, applications with strong non-functional requirements need a continuous monitoring solution to help ensure that those requirements are met. In this context, two main areas relate to our research: quality of service and monitoring solutions.

In the quality arena, Kritikos et al. [5] present an important survey comparing various models used to describe QoS-related aspects. In particular, they present criteria for comparing service quality models and meta-models. Against those criteria, our project exhibits several of the desirable characteristics mentioned, such as extensibility, a well-defined formalism, and expressiveness. Beyond that, our research puts an emphasis on dynamism, allowing the (re)definition of quality attributes and metrics at runtime. Finally, we provide a real monitoring mechanism, embedded in our platform, that supports the defined models and meta-models.

From the monitoring perspective, several industrial and research projects have proposed monitoring solutions, each with its strengths and weaknesses. Due to space limitations, in the remainder of this section we present a small but relevant selection of related research projects.

Baresi and Guinea [10] propose an approach to monitor complex service-based systems. Their proposal makes two contributions. The first is an extensible declarative language (mlCCL) that allows specifying how to collect, aggregate, and analyze runtime data: which data to collect from the layers of the system, how to aggregate them to build higher-level knowledge, and how to analyze them to discover undesired behaviors. The second is a middleware for event correlation and aggregation that supports mlCCL specifications and provides advanced data aggregation and analysis features; additionally, it can be used to probe systems that are based on the Service Component Architecture (SCA) and deployed to virtual resources. The primary difference between that research and our proposal is our focus on dynamism: through meta-models, it is possible to specify at runtime which services should be monitored, which quality attributes should be considered, and how different quality metrics are computed, whereas they propose a solution embedding a predefined set of metrics and quality attributes, as well as a fixed way of computing metrics.
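The contrast drawn above — runtime-defined metrics versus a predefined, fixed set — can be sketched as a processing agent whose metric definitions are registered while it is already consuming events. This is a deliberately simplified stand-in for the model-driven reconfiguration the meta-model enables; every name here is hypothetical, not the DSOA platform's real interface.

```python
class MonitoringAgent:
    """Sketch of a processing agent whose metrics are (re)defined at runtime."""

    def __init__(self):
        self.metrics = {}   # metric name -> (event filter, aggregation function)
        self.events = []    # runtime events observed so far

    def define_metric(self, name, event_filter, aggregate):
        # (Re)defining a metric needs no redeploy: the next compute()
        # call already uses the new definition.
        self.metrics[name] = (event_filter, aggregate)

    def publish(self, event):
        self.events.append(event)

    def compute(self, name):
        event_filter, aggregate = self.metrics[name]
        return aggregate([e for e in self.events if event_filter(e)])


agent = MonitoringAgent()
agent.publish({"service": "Homebroker", "elapsed_ms": 12})
agent.publish({"service": "Homebroker", "elapsed_ms": 18})
agent.publish({"service": "Quotes", "elapsed_ms": 40})

# Defined only after events started flowing, i.e. "at runtime".
agent.define_metric(
    "avg_rt_homebroker",
    event_filter=lambda e: e["service"] == "Homebroker",
    aggregate=lambda evs: sum(e["elapsed_ms"] for e in evs) / len(evs),
)
print(agent.compute("avg_rt_homebroker"))  # 15.0
```

In the actual platform the filter and aggregation would come from an interpreted model rather than inline lambdas, but the key property is the same: which services, attributes, and computations are monitored is data, not code baked into the monitor.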

Moser et al. [11] propose an event-based approach to monitoring service composition infrastructures, using complex event processing techniques to avoid fragmentation of the monitored data. They argue that it is necessary to enable the identification of anomalies in an agnostic, unobtrusive and multi-process way. In contrast to our work, they do not use meta-models to define quality attributes and events. Also, they do not isolate the CEP engine, and they specify the event-related queries directly through EPL statements.

VI. CONCLUSION AND FUTURE DIRECTIONS

The specification of quality attributes and metrics is fundamental for developing QoS-aware service compositions. Despite this relevance, there is no consensus on the set of QoS attributes that are relevant, nor on the approaches used to monitor them and to compute the corresponding metrics. An ideal monitoring system should support rich quality models, comprising an extensible collection of quality attributes and enabling a wide analysis of these attributes through a collection of user-defined metrics. Although different research and industrial solutions have been proposed, they usually lack flexibility concerning the definition of new quality attributes and metrics. Moreover, they usually do not allow on-the-fly reconfiguration of their QoS data gathering and processing components. In this context, we introduced the DSOA meta-model, a platform-independent model that defines and connects the service, quality and event domains in order to support the specification of QoS-aware service-based applications. This meta-model provides the required flexibility, since application developers can design models that specify the quality attributes and metrics that are relevant in the application context, as well as how these metrics are computed in terms of the events that occur during application execution. These models can be interpreted at runtime in order to (re)configure service-based applications and their supporting platforms dynamically.

Despite the relevance of the proposed meta-model, some relevant aspects remain to be addressed. In future work, we intend to extend our meta-model to include aspects related to dynamic adaptation, eventually leading to an adaptive QoS-aware service-based platform. An orthogonal point to be addressed is the implementation of graphical tools that aid developers in creating models based on our DSLs. Such tools would support preliminary validation, a step towards the development of models that are "correct by construction".

REFERENCES

[1] C. Escoffier, R. S. Hall, and P. Lalanda, "iPOJO: an Extensible Service-Oriented Component Framework," in IEEE SCC. IEEE Computer Society, 2007, pp. 474–481.
[2] A. M. Colyer et al. (SpringSource), "Spring Dynamic Modules Reference Guide." [Online]. Available: http://docs.spring.io/osgi/docs/1.2.1/reference/html
[3] OASIS, "Service Component Architecture (SCA)." [Online]. Available: http://oasis-opencsa.org/sca
[4] R. S. Hall and H. Cervantes, "Gravity: supporting dynamically available services in client-side applications," in ESEC/SIGSOFT FSE. ACM, 2003, pp. 379–382.
[5] K. Kritikos, B. Pernici, P. Plebani, C. Cappiello, M. Comuzzi, S. Benbernou, I. Brandic, A. Kertész, M. Parkin, and M. Carro, "A survey on service quality description," ACM Comput. Surv., vol. 46, no. 1, p. 1, 2013.
[6] O. Etzion and P. Niblett, Event Processing in Action. Manning Publications Company, 2010. [Online]. Available: http://www.manning.com/etzion
[7] M. Edwards, O. Etzion, M. Ibrahim, S. Iyer, H. Lalanne, M. Monze, C. Moxey, M. Peters, Y. Rabinovich, and G. Sharon, "A Conceptual Model for Event Processing Systems," 2010. [Online]. Available: http://www.ibm.com/developerworks/library/ws-eventprocessing
[8] M. Fowler, Patterns of Enterprise Application Architecture. Addison-Wesley Longman, Amsterdam, 2002.
[9] EsperTech Inc., "EsperTech - Event Series Intelligence." [Online]. Available: http://www.espertech.com
[10] L. Baresi and S. Guinea, "Event-Based Multi-level Service Monitoring," in ICWS. IEEE, 2013, pp. 83–90.
[11] O. Moser, F. Rosenberg, and S. Dustdar, "Event Driven Monitoring for Service Composition Infrastructures," in WISE, ser. Lecture Notes in Computer Science, L. Chen, P. Triantafillou, and T. Suel, Eds., vol. 6488. Springer, 2010, pp. 38–51.
