Evaluating Object Transactional Monitors with OrbixOTM

Ian Gorton* **, Anna Liu*

*Advanced Distributed Software Architectures and Technologies (ADSaT), CSIRO Mathematical and Information Sciences, Locked Bag 17, North Ryde NSW 1670, Australia
email: {Ian.Gorton, Anna.Liu}@cmis.csiro.au
Abstract

As more and more object-oriented transaction processing monitors are developed, users in industries such as banking and telecommunications need systematic and critical evaluations of the strengths and weaknesses of these products. This paper presents the Middleware Evaluation Project (MEP), which aims to provide an impartial evaluation based on rigorously derived tests and benchmarks. The evaluation framework, based on the TPC-C benchmark, is presented first, followed by a discussion of the evaluation criteria. Preliminary results for the OTM product OrbixOTM are also given.
1. Introduction

Traditionally, transaction processing monitors existed in the realm of mainframe computers running CICS, IMS and other monolithic Transaction Processing (TP) monitors. In the last decade, distributed TP monitors have demonstrated their suitability for building transaction-based systems using a 3-tier client-server architecture [1]. Several distributed TP monitors, such as Tuxedo [2][3], Encina [4], and Top-End [5], are widely used, providing reliable transaction processing and hiding the low-level networking details from programmers.

Recently, distributed object-oriented systems have begun to be constructed. Systems constructed in an object-oriented manner exhibit modularity, reusability, and extensibility. The emergence of de-facto standards such as the Common Object Request Broker Architecture (CORBA) further promotes the construction of distributed object systems. As a result, TP monitors with object-oriented support are an important new requirement for industries that
**Adjunct Associate Professor School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
require large-scale transaction processing systems, such as banking, financial services and telecommunications. Several object-oriented transaction monitors (OTMs) have been developed. These OTMs are new and in most cases not yet widely deployed. They include:
• CORBA-based: OrbixOTM from Iona Technologies, ITS from Inprise, M3 from BEA
• DCOM-based: MTS from Microsoft
• DCE-based: Encina++
• Proprietary technologies: Forte

Systematic evaluation of the various object-oriented transaction monitors is needed to understand the strengths and weaknesses of each technology, and the contexts and tasks where each product is most suitably applied. This understanding and knowledge will have important consequences for the use of existing OTMs and the engineering of scalable and reliable transactional object systems. The Middleware Evaluation Project (MEP) [6] is one such attempt. The MEP project aims to systematically evaluate the strengths and weaknesses of existing OTM products by:
• implementing a realistically sized example application using a number of OTMs,
• using the same platforms, compilers and databases to reduce variables in the evaluation environment,
• comparing and contrasting results in terms of performance, scalability, code/effort for features, etc.

The results of this evaluation project will enable organisations to select OTMs based upon rigorously
derived evidence, and hence deploy the most suitable product for applications with differing requirements. This paper first outlines the evaluation framework, which is based on the TPC-C benchmark application [7]. Then the various evaluation criteria are presented and discussed. Finally, some preliminary results obtained for the OTM product OrbixOTM [8][9] from Iona Technologies are given.
Five transactions have also been defined, as given in Table 1.
2. Evaluation framework

The MEP project implements the TPC-C application published by the Transaction Processing Performance Council (TPC), whose members include computer system and database software vendors, market research firms, system integrators, and end-user organisations. TPC-C provides a standard benchmark for TP monitor systems.
2.1 TPC-C application

The TPC-C benchmark comprises a set of basic operations designed to exercise system functionality in a manner representative of complex On-Line Transaction Processing (OLTP) application environments. These operations portray the activities of an organisation that manages, sells, and distributes a product or a service. The workload is centered on the activity of processing orders, together with several maintenance functions. The company portrayed by the benchmark is a wholesale supplier with a number of geographically distributed sales districts and associated warehouses. As the company's business expands, new warehouses are created. Each regional warehouse covers 10 districts, and each district serves 3,000 customers. All warehouses maintain stocks for the 100,000 items sold by the company. To represent this business information system, there are nine table schemas in the database system. The tables and their corresponding cardinalities are illustrated in figure 1.

[Figure 1 diagram: the nine tables with their cardinalities for W warehouses — Warehouse (W), District (W*10), Customer (W*30k), History (W*30k+), Order (W*30k+), New-Order (W*9k+), Order-Line (W*300k+), Stock (W*100k) and Item (100k) — and their relationships, e.g. 10 districts per warehouse, 3k customers per district, and 5-15 order lines per order.]

Figure 1. Database Schema of the TPC-C Application
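The cardinalities in figure 1 scale with the number of warehouses W. As a quick illustration, the initial table populations can be computed as follows; this helper is hypothetical and not part of any benchmark kit (the '+' tables grow during a run, so these are initial values only):

```cpp
#include <cassert>

// Initial table cardinalities for a TPC-C database with W warehouses,
// following the scaling rules shown in figure 1.
struct TpccCardinalities {
    long warehouse, district, customer, history,
         order, new_order, order_line, stock, item;
};

TpccCardinalities initial_rows(long W) {
    TpccCardinalities c;
    c.warehouse  = W;            // one row per warehouse
    c.district   = W * 10;       // 10 districts per warehouse
    c.customer   = W * 30000;    // 3,000 customers per district
    c.history    = W * 30000;    // one history row per customer initially
    c.order      = W * 30000;    // one order per customer initially
    c.new_order  = W * 9000;     // undelivered portion of the orders
    c.order_line = W * 300000;   // ~10 order lines per order on average
    c.stock      = W * 100000;   // one stock row per item per warehouse
    c.item       = 100000;       // fixed, independent of W
    return c;
}
```

For example, a five-warehouse database starts with 500,000 stock rows and 1.5 million order-line rows.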
| Transaction | Functionality | Database access type | Performance requirement |
|-------------|---------------|----------------------|-------------------------|
| NewOrder | entering a complete order | mid-weight, read-write transaction with high frequency of execution | stringent response time requirement to satisfy on-line users |
| Payment | updates customer's balance and reflects the payment on the district and warehouse sales statistics | light-weight, read-write transaction, high frequency of execution | stringent response time requirement to satisfy on-line users |
| OrderStatus | queries status of a customer's last order | mid-weight read-only, low frequency of execution | response time requirement to satisfy on-line users |
| Delivery | processing a batch of not-yet-delivered order lines (note: each order line is processed …) | the business transaction (comprising a batch of database transactions) has a low frequency of execution | must complete within a relaxed response time requirement |
| StockLevel | determines the number of recently sold items that have a stock level below a specified threshold | heavy read-only database transaction, low frequency of execution | relaxed response time requirement, relaxed consistency requirements |

Table 1. TPC-C Transactions

For the purposes of the MEP project, we have chosen the TPC-C benchmark application as a guideline example application only. We have also made major extensions, including interfaces to message queuing systems and additional DBMSs and transactions. Further, we have devised our own evaluation criteria, which have been carefully designed to more appropriately stress-test object-oriented transaction monitors.
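The high and low execution frequencies in Table 1 correspond to a weighted transaction mix that a test client must reproduce. A minimal weighted selector can be sketched as follows; the weights used in the example are illustrative of the high/low split, not the exact percentages mandated by the TPC-C specification:

```cpp
#include <cassert>
#include <string>
#include <vector>

// One entry in a weighted transaction mix.
struct MixEntry { std::string name; int weight; };

// Maps a uniform integer draw in [0, total()) onto a transaction name,
// so transactions are issued in proportion to their weights.
class TxnMix {
public:
    explicit TxnMix(std::vector<MixEntry> mix) : mix_(std::move(mix)) {
        for (const auto& e : mix_) total_ += e.weight;
    }
    std::string pick(int draw) const {
        for (const auto& e : mix_) {
            if (draw < e.weight) return e.name;
            draw -= e.weight;
        }
        return mix_.back().name;  // guards against an out-of-range draw
    }
    int total() const { return total_; }
private:
    std::vector<MixEntry> mix_;
    int total_ = 0;
};
```

A driver would feed `pick` a uniformly random value, e.g. `mix.pick(rng() % mix.total())`, with heavy weights on NewOrder and Payment and light weights on the remaining three transactions.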
3. Evaluation criteria

What are the requirements for building a reliable, scalable, enterprise-wide, component-based application? What product features and build support are desirable for minimizing development risks and cost? Below is a set of issues that will be addressed in the MEP project, in an attempt to answer the most relevant questions potential OTM users have.
3.1 End product attributes

• Scalability
- Does the OTM product support multi-threaded applications? Is load balancing automatic, or configurable?
- Is there a set of sophisticated connection management facilities that distributes client load, funnels client communication, or routes invocations to distributed objects intelligently? What about dynamic object creation and binding? Scalable event handling? Fault tolerance?
- Is there an efficient database connection mechanism? For example, connection pooling provides multiplexed connections to the database rather than opening a database connection for each request, saving system resources and improving scalability.
- Does the OTM support tens of thousands of clients? Is a scalable server architecture in place to handle the volume of 'hits' possible from the web?
- Does the OTM support millions of stateful objects efficiently? Does it manage, activate, and deactivate server objects with data integrity?

• Reliability
- Is data integrity ensured even when transactions span multiple databases, or when transactions span existing legacy systems as well as new object-oriented applications? Are the ACID properties observed, even for nested transactions?
- How robust is the OTM product? How often, and under what circumstances, does the connection binding facility or naming service fail?

• Security
- Does the OTM provide a comprehensive security system, including communication encryption, access authorization and requester authentication?
- What about the security of web applications? Is there additional support to enhance security?
- How are authentication and message integrity handled? How does a server identify a client and be sure they are who they say they are?
- Is security always handled in application code, or can it be done declaratively?
- Does the OTM implement its own security or use the platform security model? Is there a separate logon process for OTM clients?

• Interoperability
- Is there support for a wide variety of clients, including IIOP ORBs, object-based internet browsers, and Microsoft clients such as OLE/ActiveX?
- Is there support for client and server development in Java? Does the OTM product run on a variety of hardware platforms?
- Is there integration with TP monitors such as Tuxedo, CICS and IMS, and with messaging software such as MQSeries?
- Does the OTM coordinate transactions whose participants can include databases as well as existing systems?
- How fully integrated is the OTM product with the ORB? Can the OTM product fully leverage the corresponding ORB capabilities? Desirable features for such integration include thread pooling, logging, recovery, and data access; these greatly reduce the complexity of deployment.

• Performance
- What is the overhead of object creation? The CORBA Transaction Service Specification requires many objects to be created for each transaction, whether necessary or not.
- Does the OTM product deliver performance and resource utilization at least as good as existing TP monitors?
- How efficiently does it instantiate and manage pools of object implementations? How well does it balance requests across multiple distributed copies of object classes? How quickly does it route invocations to an available implementation?
- How quickly does it activate standard services such as naming? Do co-located client and server objects exploit performance advantages?
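The database connection pooling raised under Scalability can be sketched minimally as follows; the `Connection` type and pool size here are illustrative stand-ins, not a real database or OTM handle:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <vector>

// Placeholder for a real database connection handle (illustrative only).
struct Connection { int id; };

// A fixed-size pool: connections are opened once up front and reused
// across requests, instead of opening a new connection per request.
class ConnectionPool {
public:
    explicit ConnectionPool(int size) {
        for (int i = 0; i < size; ++i)
            free_.push_back(std::make_unique<Connection>(Connection{i}));
    }
    // Borrow a connection; returns nullptr when the pool is exhausted
    // (a production pool would block or queue the request instead).
    std::unique_ptr<Connection> acquire() {
        std::lock_guard<std::mutex> lock(m_);
        if (free_.empty()) return nullptr;
        auto c = std::move(free_.back());
        free_.pop_back();
        return c;
    }
    // Return a connection to the pool for reuse.
    void release(std::unique_ptr<Connection> c) {
        std::lock_guard<std::mutex> lock(m_);
        free_.push_back(std::move(c));
    }
    int available() {
        std::lock_guard<std::mutex> lock(m_);
        return static_cast<int>(free_.size());
    }
private:
    std::mutex m_;
    std::vector<std::unique_ptr<Connection>> free_;
};
```

The key design point is the one the criteria list makes: the cost of connection establishment is paid once per pooled connection rather than once per client request.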
3.2 Build technologies and operational management

• Fault Management
- Can the OTM keep applications working despite client, process, computer, and network failures?
- Is there an automatic recovery mechanism for application failures, transaction failures, network failures, and node failures? How is recovery done? Is user intervention required? What recovery services or management support is provided?

• Advanced Functionality/Usability
- Is there support for transactions involving multiple objects, or transactions which span multiple data sources?
- Does it include one-phase and distributed two-phase transaction commits? Is there XA support that works with a variety of databases and other resource managers?
- How easy are administration and deployment? Is there an integrated development environment?

• System Management
- Are there management and monitoring tools for querying the status and completion of transactions, and for terminating a transaction?
- Is deployment flexible? Does the OTM service daemon need to be installed on all nodes? Can components be distributed as both a shared library and as an executable?

• Support for Industry Standards
- Does the OTM support the CORBA development and communication standards, or any other de-facto standards? Can objects communicate with objects on other subsystems through IIOP?
- Is there support for popular languages such as Java and C++? Is there support for XA and non-XA communication with resources? Can standard tools be used for development?
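The one-phase versus two-phase commit distinction mentioned under Advanced Functionality can be illustrated with a toy coordinator. The `Participant` interface below is invented purely for illustration; it is not the OTS or XA API:

```cpp
#include <cassert>
#include <vector>

// Toy resource manager for illustrating two-phase commit.
struct Participant {
    bool can_commit;        // vote returned in the prepare phase
    bool committed = false;
    bool rolled_back = false;
    bool prepare()  { return can_commit; }
    void commit()   { committed = true; }
    void rollback() { rolled_back = true; }
};

// Phase 1: ask every participant to prepare (vote).
// Phase 2: commit only if all voted yes; otherwise roll everyone back.
// This is what preserves atomicity across multiple data sources.
bool two_phase_commit(std::vector<Participant>& parts) {
    for (auto& p : parts) {
        if (!p.prepare()) {
            for (auto& q : parts) q.rollback();
            return false;
        }
    }
    for (auto& p : parts) p.commit();
    return true;
}
```

A one-phase commit skips the voting round entirely, which is only safe when a single resource manager is involved.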
Some of the above issues have overlapping areas of concern, and some present conflicting requirements. For example, interoperability with many other products means that a single enterprise-wide development environment is difficult to achieve. Such trade-offs must be analyzed and balanced to suit an individual organization's or project's needs.
3.3 Run-time measures

The run-time measures to be carried out in the MEP project include the following:
• time to carry out individual TPC-C transactions
• overall system throughput
• time for remote invocations
• local and remote database access time
• load balancing effectiveness
• single versus multi-threaded processes
• overheads of security mechanisms
• scalability for thousands of clients

Further, in order to test fault-handling capabilities, sample scenarios were created, e.g.:
• a server fails: what happens?
- time to restart the server
- time to resolve in-flight transactions
- client behaviour
• the name server fails: is failover possible?
- time to fail over
- client behaviour
These run-time measures enable us to test for some of the evaluation criteria discussed in section 3.

4. Evaluation results of OrbixOTM

Six object transaction monitors will be evaluated for the MEP project. These are:
• OrbixOTM
• Encina++
• M3
• Forte
• MTS
• Inprise ITS
Preliminary results have been obtained for OrbixOTM, and evaluations of Encina++, M3, Forte, and MTS are currently underway.

4.1 OrbixOTM test configuration

Care has been taken to design the evaluation environment so as to reduce the variables involved. All tests will be carried out on the same platform, using the same compilers and database. The TPC-C application was implemented using:
• OrbixOTM 1.0c
• Microsoft Windows NT 4.0 / Visual C++ 5.0
• Microsoft SQL Server 6.5
The hardware used for the OrbixOTM tests was:
• Dell 260MHz Pentium clients, 128 MB RAM
• Dell dual 400MHz Pentium servers, 0.5 GB RAM
• 10 Mbit Ethernet
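The per-transaction timing measures listed in section 3.3 can be collected with a simple harness along the following lines; this sketch is illustrative and is not the MEP project's actual instrumentation (the transaction body is a stand-in for a real TPC-C invocation):

```cpp
#include <cassert>
#include <chrono>
#include <functional>
#include <vector>

// Mean and maximum latency in milliseconds over a run.
struct LatencyStats { double mean_ms; double max_ms; };

// Run a transaction n times, timing each call with a monotonic clock.
LatencyStats measure(const std::function<void()>& txn, int n) {
    std::vector<double> samples;
    for (int i = 0; i < n; ++i) {
        auto start = std::chrono::steady_clock::now();
        txn();  // e.g. a NewOrder invocation on a remote object
        auto end = std::chrono::steady_clock::now();
        samples.push_back(
            std::chrono::duration<double, std::milli>(end - start).count());
    }
    double sum = 0, mx = 0;
    for (double s : samples) { sum += s; if (s > mx) mx = s; }
    return {sum / n, mx};
}
```

Using `steady_clock` rather than the wall clock avoids distortion from clock adjustments during long test runs.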
[Figure 2 diagram: client machines (Dell 260MHz Pentium, 128 MB RAM) connected over 10 Mbit Ethernet to an application server machine and a database server machine (both Dell dual 400MHz Pentium, 0.5 GB RAM); the database server runs MS SQL Server 6.5, accessed via ODBC/XA.]

Figure 2. OrbixOTM Test Hardware Configuration

Two server machines were used (see figure 2):
• one running SQL Server, the other running up to five OrbixOTM replicated server processes
• the OTS servers are single-threaded, using XA
Two client machines were used:
• each hosts up to 30 client processes
• clients run a mix of TPC-C transactions
• clients bind to an object (via an OrbixNames random group) at start-up and use this object throughout the test

Our initial investigations include the following:
• load balancing with OrbixOTM/OrbixNames
• the test system implemented three TPC-C transactions (New-Order, Payment, Stock-Level)
• all transactions are supported by a single transactional object (see figure 3)

interface TPCTrans : CosTransactions::TransactionalObject {
    void NewOrder (in short w_id, in short d_id, in long c_id,
                   in TorderInSeq ordsIn,
                   out TnewOrdHeader ord_head,
                   out TorderOutSeq ordsOut) raises (DBError);
    void Payment (in short w_id, in short d_id, inout long c_id,
                  inout TcustName c_name, in float amount,
                  in Tdate pay_date,
                  out TcustDetails cust) raises (DBError);
    void OrderStatus (in short w_id, in short d_id, inout long c_id,
                      inout TcustName c_name, out TcustOrdInfo c_info,
                      out OrdLineStatusSeq ordline) raises (DBError);
    void Delivery (in short w_id, in short carrier_id,
                   in Tdate deliv_date) raises (DBError);
    void StockLevel (in short w_id, in short d_id, in short threshold,
                     out short count) raises (DBError);
    void GetObjPerformanceInfo ( //……
    );
    void GetObjTransactionCounts ( //……
    );
};
[Chart: SQL transaction workload (0-350) for 1, 5, and 10 clients; series: New Order, Payment, Stock]

Figure 3. Transaction Workload at DBMS

[Chart (belongs with the Figure 4 caption below): SQL workload (0-700) for 1, 2, 3, and 5 servers; series: New Order, Payment, Stock]
Figure 3. TPC Transactional Object IDL
4.2 Preliminary evaluation results

The first set of results deals with how much time it takes to perform the SQL operations for each transaction. Figure 3 shows the transactional workload at the DBMS. This test result was obtained with a 2-tier (client-ODBC-SQL Server) configuration, giving us an indication of the 'overhead' of DBMS interaction.
Figure 4. OTS Server SQL Workload

Figure 4 shows the OTS server load under the test conditions given in section 4.1. Figure 5 illustrates the transaction times as seen by clients.
[Chart: transaction times in milliseconds (0-3000) for New Order, Payment, and Stock, as seen by 10-50 clients with 1, 2, 3, and 5 servers]

Figure 5. Transaction Times at Clients

Some interesting observations we have made so far include:
• some clients finished much earlier than the other clients
• transaction times varied significantly at different clients, especially for the Stock-Level transaction
• during test runs, some servers would fall idle (no workload) well before others

We employed the 'random selection from object group' feature provided by OrbixOTM for binding clients to servers. Table 2 shows the system workload at the various OTS servers.

|    | Test 1 | Test 2 | Test 3 | Test 4 | Test 5 |
|----|--------|--------|--------|--------|--------|
| S1 | 100%   | 33%    | 46%    | 25%    | 10%    |
| S2 |        | 67%    | 28%    | 18%    | 21%    |
| S3 |        |        | 28%    | 18%    | 21%    |
| S4 |        |        |        | 39%    | 17%    |
| S5 |        |        |        |        | 31%    |

Table 2. System Workload at OTS Servers

We can see that there is an uneven load of clients across servers, which leads to:
• some clients waiting much longer at a server than others
• sub-optimal use of system resources
This indicates that there is a load-balancing problem. One way to overcome it is to use round-robin object groups. This is a simple solution, but it is unlikely to perform better in a heavily loaded, dynamic system. Another possible solution is to make clients rebind to different objects at 'some regular interval' during their existence. This approach raises several instrumentation questions, such as: how long should this interval be? Should rebinding be done every transaction, or at some other nominal time interval?

The next 'natural' question is how long it takes for a client to get a new binding. Figure 6 shows the experimental results in milliseconds.

[Chart: client binding time in milliseconds (0-25) for 10, 20, 30, 40, and 50 clients with 1, 2, 3, and 5 servers]

Figure 6. Client Binding Time

The times include the resolve operation and the narrowing of the returned object reference. The test was set up so that OrbixNames ran on a different (lightly loaded) machine to the clients, with a sparsely populated name space. Hence we can see that going to OrbixNames for a new binding is 'not free'; in fact, it is quite a significant overhead in a high-performance system.

In the light of these results, a design rule-of-thumb would be to minimise remote accesses in a high-performance distributed system. Such a solution should also accommodate the failure of servers. One other, more 'intelligent', solution is to have the client cache some or all of the relevant object references from OrbixNames at start-up, then select a different object reference from the local cache for each transaction. The cache would then need to be refreshed at some configurable interval (e.g. every few minutes). The complexities of this approach include:
• it must handle object failures and invalidate cached references when necessary
• it must handle OrbixNames/network outages
• it should be thread-safe
• it needs to be aware of transactional affinity
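The cached-reference scheme described above could look roughly like the following sketch; `ObjRef` and the refresh policy are illustrative stand-ins for real OrbixNames object references, and transactional affinity and outage handling are deliberately omitted:

```cpp
#include <cassert>
#include <mutex>
#include <string>
#include <vector>

// Stand-in for a CORBA object reference resolved from a naming service.
using ObjRef = std::string;

// Client-side cache: resolve a set of references once at start-up,
// hand them out round-robin per transaction, invalidate on failure,
// and refresh periodically from the naming service.
class RefCache {
public:
    explicit RefCache(std::vector<ObjRef> refs) : refs_(std::move(refs)) {}

    // Round-robin selection; returns an empty ref if the cache is empty.
    ObjRef next() {
        std::lock_guard<std::mutex> lock(m_);
        if (refs_.empty()) return ObjRef{};
        ObjRef r = refs_[next_ % refs_.size()];
        ++next_;
        return r;
    }

    // Drop a reference whose server has been observed to fail.
    void invalidate(const ObjRef& dead) {
        std::lock_guard<std::mutex> lock(m_);
        for (std::size_t i = 0; i < refs_.size(); ++i) {
            if (refs_[i] == dead) { refs_.erase(refs_.begin() + i); break; }
        }
    }

    // A periodic refresh would re-resolve the group from the naming
    // service and replace the cached set wholesale.
    void refresh(std::vector<ObjRef> fresh) {
        std::lock_guard<std::mutex> lock(m_);
        refs_ = std::move(fresh);
        next_ = 0;
    }

    std::size_t size() {
        std::lock_guard<std::mutex> lock(m_);
        return refs_.size();
    }
private:
    std::mutex m_;
    std::vector<ObjRef> refs_;
    std::size_t next_ = 0;
};
```

With such a cache, the naming-service round trip is paid only at start-up and at each refresh, rather than on every rebind.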
5. Summary and future work

Object Transaction Monitors enable organizations that are committed to an object-based strategy to develop business-critical distributed applications. Developers can thus leverage the benefits of object-oriented design and programming in building reliable and scalable applications. However, with the plethora of OTM products available today, most of them still poorly understood in actual application use, potential users would greatly benefit from rigorously derived evidence with which to choose the product most appropriate for their application.

We have presented an evaluation framework for object transaction monitors. The MEP project aims to compare and contrast the features and performance of a variety of available OTM products, to assist organisations in choosing a product for deployment. We have so far obtained preliminary results for OrbixOTM, and found that load balancing is an important issue that needs to be explored further. The implications of client binding efficiency and naming service performance are also key issues that need to be carefully considered.

Further evaluation work is being carried out on Forte, Encina++ and DCOM-MTS, with other products such as Inprise ITS and M3 to follow. In the near future, other platforms such as Solaris may be used, and more machines may be acquired to increase the number of client and server machines used in this evaluation project. We aim to make the results of the MEP project publicly available via the World Wide Web later in the year.
References

[1] Shaw, M. and Garlan, D. (1996) Software Architecture – Perspectives on an Emerging Discipline, Prentice-Hall, New Jersey.
[2] Compaq Digital Products and Services (1995) Tuxedo Enterprise Transaction Processing System 5.0, Digital Software Product Description, http://www.digital.com/info/SP4804/
[3] AT&T (1990) Tuxedo Transaction Processing System, News Release for January 16, 1990, http://www.att.com/press/0190/900116.ula.html
[4] P. J. Houston (1993) Building the Next Generation of Client/Server Systems with the Encina Monitor, White Paper, Transarc Corporation, downloadable from http://www.transarc.com/Library/whitepapers/index.html
[5] NCR Corporation (1996) Top End Product Overview, January 1996, downloadable from http://www3.ncr.com/support/nt/docs/top-end.doc
[6] I. Gorton, A. Liu (1999) Evaluating Transactional Object Systems with the OrbixOTM, IonaWorld Conference 1999, San Francisco, conference proceedings.
[7] Transaction Processing Performance Council (1998) TPC-C Benchmark Specification Rev 3.4, downloadable from http://www.tpc.org/cspec.html
[8] Iona Technologies PLC (1998) OrbixOTM Guide, Dublin, Ireland, January.
[9] Iona Technologies PLC (1998) OrbixOTS Programmer's and Administrator's Guide, Dublin, Ireland, January.