A Framework for Performance Monitoring, Modelling and Prediction of Component Oriented Distributed Systems

Adrian Mos*, John Murphy*
Performance Engineering Laboratory, Dublin City University, Glasnevin, Dublin 9, Ireland
[email protected], [email protected]

ABSTRACT
We present a framework that can be used to identify performance issues in component-oriented distributed systems. The framework consists of three interrelated modules: a monitoring module, a modelling module and a prediction module. The monitoring module extracts real-time performance data from a live system or one under development. The modelling module generates UML models of the system showing where the performance problems are located, and drives the monitoring process. The performance prediction module simulates different system loads on the generated models and pinpoints possible performance issues. The technological focus is currently on Enterprise JavaBeans systems.

1. INTRODUCTION
Existing component-oriented frameworks such as Enterprise JavaBeans (EJB) help developers build complex distributed systems faster than ever before. To meet time-to-market requirements, developers often use Commercial-Off-The-Shelf (COTS) components and integrate them into their systems. The complex interactions in which COTS components take part, and the complexity of the frameworks themselves, can lead to performance issues that are both hard to find and hard to predict. We present a methodology that helps developers and system integrators understand, and potentially correct, the performance issues of a component-oriented distributed system at the component level. Using this methodology, they will also be able to predict the behaviour of their system under different user loads.

The Component Performance Assurance Solutions (COMPAS) framework consists of three modules, presented in Figure 1. The monitoring module provides the monitoring/management infrastructure required to perform the actual monitoring, together with the control and data processing/storage infrastructure. The information gathered during the monitoring process is needed by the modelling module, which is responsible for generating dynamic UML models of the system. The models are enhanced with real-time performance indices obtained from the monitoring module, corresponding to the concepts and notations defined in [3]. A logical feedback loop connects the monitoring and modelling modules, allowing the monitoring process to be refined by the modelling process in order to increase the efficiency and accuracy of monitoring and modelling.

The performance prediction module will be used mostly in an off-line mode of operation. It will process saved models from the modelling module to simulate different system loads, as specified by the user via a control console. The COMPAS Performance Predictor will show the impact of different usage scenarios as well as the impact of changes to the design of the system. The development/integration teams will see how the performance and functionality of the system would be affected when such changes occur.

COMPAS differs from other performance prediction approaches such as [4] by emphasizing the importance of monitoring and the automatic generation of models augmented with “real” performance data. We believe that systems built using middleware such as EJB possess an inherent complexity that makes it difficult for developers to make the performance assumptions required by many alternative approaches [4]. An EJB application server handles services such as transactions, security, caching, replication and pooling of components, making it almost impossible for developers to create models in which they assign methods to CPUs or to processes [4].

2. MONITORING
The goal of the monitoring module is to capture run-time dynamic data, such as method invocation events, for each component in the target distributed system. Our approach is non-intrusive in that it does not require any changes to the original application or the application server. In order to capture dynamic performance indicators for the monitored components (EJBs), a suite of proxy components is inserted into the application server, composing a parallel application. This parallel application has one proxy component mirroring each component in the original application. Every proxy component assumes the identity of its related original component and exposes the same operational interface. Thus, all the external and internal clients of the original application use the newly deployed proxy application instead. The proxy components are aware of every component-level event that would normally take place in the original components, such as method invocations or component activation/passivation. The role of a proxy component is to capture all these events, notify the monitoring subsystem about them and forward the original request to its intended recipient (the original component) [5].

We have implemented a proof-of-concept monitoring module that can track method invocations and measure their execution times. It displays graphical real-time charts showing the evolution of any method’s execution time in any EJB component [5]. The current implementation of the monitoring module uses the Java Management Extensions (JMX) framework, which eases the management operations for the proxy components. The proxy components have a management interface that can be used to control them and selectively turn monitoring on or off for particular components, methods or events.
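To make the interception idea concrete, the following is a minimal sketch of such a proxy written with a JDK dynamic proxy and the standard JMX MBean naming convention. The MonitoringDispatcher interface and all class names are hypothetical illustrations of the technique, not the actual COMPAS implementation:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// Hypothetical sink for monitoring events; the real proxy components
// report to the JMX-based monitoring subsystem instead.
interface MonitoringDispatcher {
    void methodInvoked(String component, String method, long elapsedMillis);
}

// Standard JMX naming convention: an "XMBean" interface exposes the
// management operations of class X once registered with an MBeanServer.
interface MethodProxyMBean {
    boolean isMonitoringEnabled();
    void setMonitoringEnabled(boolean enabled);
}

// A proxy that assumes the identity of an original component: it exposes
// the same interface, times each call, notifies the monitoring subsystem
// and forwards the request to the original component.
public final class MethodProxy implements InvocationHandler, MethodProxyMBean {
    private final Object target;                 // the original component
    private final MonitoringDispatcher dispatcher;
    private volatile boolean enabled = true;     // toggled via the management interface

    private MethodProxy(Object target, MonitoringDispatcher dispatcher) {
        this.target = target;
        this.dispatcher = dispatcher;
    }

    // Wraps the original component behind a proxy exposing the same interface.
    public static Object wrap(Object target, Class iface, MonitoringDispatcher d) {
        return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class[] { iface }, new MethodProxy(target, d));
    }

    public boolean isMonitoringEnabled() { return enabled; }
    public void setMonitoringEnabled(boolean enabled) { this.enabled = enabled; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return method.invoke(target, args);  // forward to the original component
        } catch (InvocationTargetException e) {
            throw e.getTargetException();        // rethrow the component's own exception
        } finally {
            if (enabled) {
                dispatcher.methodInvoked(target.getClass().getName(),
                        method.getName(), System.currentTimeMillis() - start);
            }
        }
    }
}

In an actual EJB deployment the proxies would be generated from the original beans’ interfaces and registered with an MBeanServer, so that monitoring can be switched on or off remotely at run time.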

3. MODELLING
The modelling module is responsible for generating and processing UML models of the target application. We comply with the newly adopted Model Driven Architecture (MDA) [1] methodology for describing the functional and performance characteristics of the target distributed application. MDA introduces two important concepts: the Platform Independent Model (PIM) and the Platform Specific Model (PSM). A PIM would generally be used in the earlier stages of development; it consists of a detailed UML model of the business logic without any technological details. A PIM is therefore platform independent, as it does not contain any platform-specific information, such as EJB Home Objects. Note, however, that a platform can be anything from a hardware platform, to an operating system, to middleware, to another PIM. The notions of platform and platform independence are thus relative, which makes it possible to have an arbitrary number of PIMs for the same problem space, each representing a different level of abstraction. A PSM contains platform-specific information in the model, such as EJB or CORBA stubs. Again, taking into account the relative nature of a platform, a PSM can simply be a more detailed description of a PIM, with more technical details.

The main advantage of using the MDA approach is the possibility of enabling a zooming feature in the COMPAS framework. Different UML models will be generated for the same transaction in the observed system. Some generated PIMs can use the EDOC profile for UML [2] to describe the system in an easier to understand, functional manner, while PSMs can be used to show the platform-specific information (EJB). The models will have associated performance data (e.g. execution times for methods, pool sizes for components), displayed to the user in a visually effective way, helping to intuitively identify performance and scalability problems.

Having generated the PIMs with their transactions (sequences of interacting components), the Modeller can advise the Monitor of components that need to be closely monitored, as sketched below. Without such feedback, the monitoring system would monitor either everything in the system or just those components and events manually selected by a human user. With the transactions clearly laid out, it is possible to infer which components in a given transaction are actually responsible for a scalability problem and focus the monitoring process on them, thereby reducing the monitoring overhead on the system.
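The feedback loop between the Modeller and the Monitor could take a shape along the following lines; the Monitor contract and all names here are hypothetical illustrations, assuming per-component monitoring switches like those exposed by the JMX management interface above:

// Hypothetical management contract the Modeller uses to steer the
// Monitor; the real framework controls the proxies through JMX.
interface Monitor {
    void enableMonitoring(String componentName);
    void disableMonitoring(String componentName);
}

// Sketch of the feedback loop: once a transaction's components are
// known, monitoring is narrowed to the suspected bottlenecks only.
final class ModellerFeedback {
    private final Monitor monitor;

    ModellerFeedback(Monitor monitor) {
        this.monitor = monitor;
    }

    void focusOn(String[] transactionComponents, String[] suspects) {
        for (int i = 0; i < transactionComponents.length; i++) {
            monitor.disableMonitoring(transactionComponents[i]); // reduce overhead
        }
        for (int i = 0; i < suspects.length; i++) {
            monitor.enableMonitoring(suspects[i]); // keep watching likely bottlenecks
        }
    }
}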

4. PERFORMANCE PREDICTION
By being able to predict the performance behaviour of an application at development time as well as after deployment, developers can design and code better from the early stages of the project. The COMPAS performance prediction module will use visualization and simulation techniques to make predictions based on different user loads that can be fed into the module. The prediction module will generate runnable versions of the models produced by the modelling subsystem and simulate them in different usage scenarios. The simulations can be performed at different levels of abstraction, from top-level black boxes down to the lowest-level PSMs. This feature facilitates the comparison of, for instance, how a model would run if implemented in EJB versus CORBA. Developers could first simulate a very high-level model of the system and then browse further, simulating in detail only those subsystems that presented problems in the previous simulation. In this way, users can understand the impact of a change in either the business realities or the application design. The performance influence of a particular distribution will be illustrated on UML/EDOC-based diagrams that clearly show the entities (methods, components) affected.
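As a flavour of what feeding a user load into a saved model could look like, here is a toy simulation over one recorded transaction. The per-method mean execution times and the simple linear contention factor are assumptions made purely for illustration; they do not represent the predictor's actual simulation model:

import java.util.Random;

// Toy load simulation over one recorded transaction, assuming mean
// per-method execution times measured by the monitoring module.
public final class LoadSimulation {
    public static void main(String[] args) {
        // Assumed mean execution times (ms) of the methods making up
        // the transaction, in call order.
        double[] meanStepMillis = { 12.0, 45.0, 8.0 };
        Random rng = new Random(42);
        int runs = 1000;

        for (int users = 1; users <= 64; users *= 2) {
            double total = 0.0;
            for (int r = 0; r < runs; r++) {
                double response = 0.0;
                for (int s = 0; s < meanStepMillis.length; s++) {
                    // Exponentially distributed service time around the mean,
                    // crudely inflated by contention as the user load grows.
                    double service = -meanStepMillis[s] * Math.log(1.0 - rng.nextDouble());
                    response += service * (1.0 + 0.05 * (users - 1));
                }
                total += response;
            }
            System.out.println("users=" + users + "  mean response="
                    + (Math.round(10.0 * total / runs) / 10.0) + " ms");
        }
    }
}

Running the sketch for increasing user counts produces a response-time curve per transaction; the real predictor would present such results on the UML/EDOC diagrams described above rather than as console output.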

5. CONCLUSIONS
We propose a methodology that can help developers understand performance problems in an existing component-oriented system or one under development. The framework consists of a black-box monitoring module, a modelling module that generates UML models of the target application, and a performance prediction module that can help in understanding the impact of different usage scenarios on the application, as well as the impact of any design changes. A proof-of-concept monitoring system has been implemented, with basic graphical consoles showing variations in the performance parameters of the monitored entities. Development of the modeller and the performance prediction module is under way.

6. REFERENCES
[1] Object Management Group, Model Driven Architecture, OMG document number ormsc/2001-07-01, OMG, 2001

[2] Object Management Group, UML Profile for Enterprise Distributed Object Computing Specification, OMG document number ptc/02-02-05, OMG, 2002

[3] Object Management Group, UML Profile for Schedulability, Performance, and Time Specification, OMG document number ptc/02-03-02, OMG, 2002

[4] L.G. Williams, C.U. Smith, “Performance Engineering Evaluation of Software Architectures”, Proc. First International Workshop on Software and Performance (WOSP’98), Santa Fe, NM, USA, October 1998

[5] A. Mos, J. Murphy, “Performance Monitoring of Java Component-Oriented Distributed Applications”, Proc. IEEE 9th International Conference on Software, Telecommunications and Computer Networks (SoftCOM), Croatia/Italy, October 2001

* The authors’ work is funded by the Enterprise Ireland Informatics Research Initiative 2001, and supported by IONA and Sun Microsystems Ireland.
