Evaluation of Current Java Technologies for Telecom Backend Platform Design

Bruno Van Den Bossche, Filip De Turck, Bart Dhoedt, Piet Demeester, Didier Colle
Ghent University - IBBT - IMEC, Department of Information Technology
Gaston Crommenlaan 8 bus 201, 9050 Gent, Belgium

Thierry Pollet, Bert Van Vlerken, Johan Moreels, Nico Janssens
Alcatel, R&I
Francis Wellesplein 1, 2000 Antwerpen, Belgium

The Java programming language, and more specifically the J2EE platform, has evolved into the most important software framework for designing and implementing business logic on a telecom backend platform. However, the real-time aspects of J2EE based telecom applications are often questioned. In 2004 a new Java-based application server technology, JAIN SLEE, was standardized, which seems a promising candidate for the development of real-time telecom applications. This paper functionally compares service enabling platform designs based on J2EE, telecom specific application servers and JAIN SLEE. These three technologies have been subject to a functional and performance evaluation study, and this paper presents the evaluation procedure and the obtained results. Moreover, the influence of the Java Virtual Machine tuning parameters has been investigated and is reported upon in this paper.

Keywords: Software Performance, Evaluation and Testing, Distributed Architectures, Scalability Studies, Network Management and Control.

1 JAVA FOR TELECOM
The Java language is highly structured, strongly typed and object oriented. It is not compiled to machine-specific instructions but to byte code, which is then executed on a virtual machine available for all sorts of platforms. This makes Java highly portable, although at a certain performance cost. Another feature with important implications is the use of a garbage collector: Java includes automatic memory management (garbage collection) as a part of the Java runtime. This means that many very common developer errors related to memory management cannot occur. However, since the garbage collection is a part of the Java runtime, it is not completely under the control of the application developer. As telecom applications typically have very specific requirements regarding high throughput (e.g. the number of Voice over IP (VoIP) calls a softswitch can process per second) and low latency (e.g. the setup of a call should be very fast), Java might not seem to be the best

possible solution for designing telecom backend applications or Service Enabling Platforms (SEPs). Issues with the Java garbage collection and the possible solutions using appropriate tuning of the Java Virtual Machine are addressed in this paper. As the handling of VoIP calls is an important example of current telecom applications, we evaluated the available Java technologies by studying the performance of Session Initiation Protocol (SIP) proxy applications. The Session Initiation Protocol is typically used for the setup and teardown of VoIP calls. To meet the requirements of the telecom industry, the JAIN [1][2][3] (Java APIs for Integrated Networks) initiative provides an extensive set of standardized APIs for network related applications. JAIN builds on the portability of Java and standardizes the signaling layer of the communication networks in the Java language. It defines a communications framework for services to be developed, tested and deployed. JAIN brings important advantages such as service portability: standard Java interfaces allow for portable applications, reducing the cost of development and maintenance. JAIN creates an environment for developers and users to create and use applications guaranteed to run on standards conformant networks. The focus of the JAIN effort is to take the telecommunications market from many proprietary systems to a single open environment. All JAIN technologies are developed using the standard Java Community Process [4] (JCP). This is a formal, four-stage process for developing or revising Java technologies. First, a Java Specification Request (JSR), a proposal for the specification of a new technology, is submitted. If the proposal is approved, a group of experts is formed. They release an early draft of the specification which can be reviewed by the community. Next, using the gathered feedback, a public draft goes out for review.
After further revisions and approval of the draft, a Reference Implementation (RI) and a Technology Compatibility Kit (TCK) will be created before final approval. The Reference Implementation is a real implementation of the Java Specification and the TCK is a test suite that implementations must pass to claim compliance with the specification. The last phase is maintenance, meaning further enhancements, clarifications and revisions can be made to the specification. This paper will discuss and compare three JAIN technologies, namely JAIN SIP, SIP Servlet and JAIN SLEE. All these technologies can be used to build a telecom backend application capable of executing service logic and providing generic capabilities such as service subscription, service installation, activation, accounting and interaction with OSS and BSS systems. Furthermore, the technologies considered in this paper are related to SIP or can be used in a SIP environment. Therefore, SIP will be briefly explained first. Next a discussion of the technologies will be presented, highlighting the architecture, features, and relations with other existing technologies and each other. Then a functional comparison will be made, followed by a performance comparison.

2 SIP
The Session Initiation Protocol (SIP) is defined in RFC3261 [5] and describes an application-layer control (signaling) protocol for creating, modifying, and terminating sessions with one or more participants. These sessions include Internet telephone calls, multimedia distribution, and multimedia conferences. SIP is used in most VoIP systems, but it can be used for much more: it can invite participants into already existing sessions, media such as video streams can be added to and removed from existing sessions, and SIP transparently supports name mapping and redirection services, which enables personal mobility. Users can maintain one single externally visible identifier regardless of their network location. RFC3261 also defines the use of SIP proxies and how they should interact with other SIP applications and proxies. A SIP proxy is also the use case for the benchmarks discussed in this paper. The proxy, being an important link in SIP communication, should give us a good overview of the capabilities of the tested technologies and platforms. The scenario used for the testing and benchmarks is the Proxy 200 test shown in Figure 1, one of the benchmarks defined in the SIPstone benchmark [6]. We are interested in the time it takes to set up a call: the time from Alice's initial INVITE until she receives an OK from Bob, indicating the call is set up. After this Alice sends an acknowledgement to Bob saying she received the OK, and the media session (e.g. voice or video conference) can start. As soon as this media session has ended a BYE is sent and acknowledged to end the call. Note that the BYE may be sent by either party. When benchmarking, no media session was initiated and the call was terminated immediately after it was established.
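As an illustration of what travels over the wire in this scenario, the following sketch builds the kind of INVITE request Alice's user agent sends. All addresses and header values below are invented examples following the RFC 3261 text format; a real SIP stack generates and parses these messages for you.

```java
// Hand-built SIP INVITE in the RFC 3261 text format, for illustration only.
public class SipInviteExample {

    // Assemble a minimal INVITE with the mandatory headers.
    public static String buildInvite() {
        return String.join("\r\n",
            "INVITE sip:bob@example.com SIP/2.0",
            "Via: SIP/2.0/UDP client.example.com:5060;branch=z9hG4bK776asdhds",
            "Max-Forwards: 70",
            "To: Bob <sip:bob@example.com>",
            "From: Alice <sip:alice@example.com>;tag=1928301774",
            "Call-ID: a84b4c76e66710@client.example.com",
            "CSeq: 314159 INVITE",
            "Contact: <sip:alice@client.example.com>",
            "Content-Length: 0",
            "",
            "");
    }

    public static void main(String[] args) {
        System.out.print(buildInvite());
    }
}
```

The proxy under test receives such a request, forwards it towards Bob and relays his responses (180 Ringing, 200 OK) back to Alice.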

Figure 1. Proxy 200 Test

3 JAIN SIP
JAIN SIP, as described in JSR 32 [7] (JAIN SIP API Specification), defines a SIP protocol stack in Java compliant with RFC3261. This means an API is defined to handle and simplify any SIP related communication by supplying the necessary classes and methods. With this API a developer does not need to parse the SIP messages himself. The use of a common library does not only mean a reduced development cost, it also means improved portability: any application compliant with the JAIN SIP specification will work with any compliant implementation of this stack. A Reference Implementation (RI) of the stack is provided by the National Institute of Standards and Technology (NIST). Together with the RI a Technology Compatibility Kit (TCK) is provided. This TCK consists of tests, tools, and documentation that allow an implementer of the JAIN SIP stack to determine whether their implementation is compliant.

Figure 2. JAIN SIP Architecture

Figure 2 details the JAIN SIP architecture, with an application being built around the SIP stack. This stack is able to send and receive SIP messages. If we want to change the stack, all that is required is plugging in another JAIN SIP compliant stack.

4 TELECOM SPECIFIC APPLICATION SERVER: SIP SERVLET The SIP Servlet specification is described in JSR 116 [10] and provides us with a container in which Servlet based applications can be deployed. The SIP Servlet API was defined to simplify the development of SIP enabled applications. By using the already existing servlet architecture it’s relatively easy for developers who are familiar with HTTP Servlets to create SIP enabled applications. Basically for each type of SIP message a method is defined to handle it. One example is a “doInvite” method which will handle SIP INVITE messages.
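The per-message dispatch can be pictured with a small plain-Java stand-in. The classes below are simplified inventions, not the real javax.servlet.sip API (which passes a much richer SipServletRequest object to the callbacks), but they show how the container routes each SIP method to its own handler.

```java
// Simplified stand-in for the SIP Servlet programming model: the container
// inspects the request method and calls the matching do<Method> callback.
public class MiniSipServlet {

    protected void doInvite(String request) { }
    protected void doBye(String request) { }

    // What the container does when a SIP request arrives.
    public final void service(String request) {
        String method = request.split(" ")[0];
        switch (method) {
            case "INVITE": doInvite(request); break;
            case "BYE":    doBye(request);    break;
            default:       break; // ACK, CANCEL, REGISTER, ... omitted here
        }
    }
}

// An application servlet only overrides the callbacks it cares about.
class CallCountingServlet extends MiniSipServlet {
    int invites = 0;
    @Override protected void doInvite(String request) { invites++; }
}
```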

4.1 Architecture
As HTTP Servlets and SIP Servlets are very similar technologies, it is no surprise that the basic architecture of a SIP Servlet container is very similar to that of an HTTP Servlet container, or to the web container of a J2EE server for that matter. Figure 3 shows the architecture of a SIP Servlet application server. The server consists of a Servlet container providing the SIP Servlet APIs. In the container multiple applications can be deployed, each containing one or more SIP Servlets. Each application also contains a deployment descriptor which holds information about the application and tells the container which SIP messages it should direct to the Servlets. Servlets can then process the SIP messages and, if necessary, pass them on to other Servlets. Furthermore, multiple Servlets may process the same SIP message. Internally the container uses a SIP stack for communication with the outside world and to translate the SIP messages into the appropriate SIP Servlet method calls. This stack may be JAIN SIP compliant, but the specification does not require this. The SIP stack used is hidden from the SIP Servlets and has no influence at all on their portability. One reason to choose a JAIN SIP compliant stack might be to allow easy replacement of the stack (e.g. when using a third party stack). The SIP Servlet API provides all methods necessary to build fully fledged SIP applications.

4.2 Relation to HTTP Servlet
As already mentioned, SIP Servlet and HTTP Servlet are very similar: both technologies build on the same servlet base. As HTTP Servlets were designed to simplify the development of dynamic web based applications, SIP Servlets are designed to simplify the development of dynamic SIP based applications. Both technologies use a request/response model. The main difference lies in the synchronicity of the calls. Where a HTTP Servlet only handles synchronous calls, a SIP Servlet also needs to handle asynchronous communications. With synchronous communication we mean that in a call the caller waits until the callee replies. In an asynchronous situation the caller does not always need to wait, or the callee may initiate the communication itself when it is ready. Luckily these issues are handled by the container. HTTP Servlets are only hosted on "origin servers" which generate a final response; SIP Servlets also need to be able to route and initiate requests. Another key difference is that SIP Servlets sometimes need to generate multiple responses to one request (e.g. a provisional response followed by a final response) and they might receive both responses as well as requests.

Figure 3. Telecom Specific Architecture: SIP Servlet

5 JAIN SLEE The JAIN SLEE specification, defined in JSR 22 [12] and JSR 240 [13], provides us with an application server tailored for telecom. There’s a certain similarity between the already proven J2EE platform and the JAIN SLEE model. There are however some important differences as well. We will first give an overview of the architecture of the SLEE followed by a comparison with the J2EE architecture.

5.1 Architecture The basic architecture of a JAIN SLEE application server is shown in Figure 4 and is similar to the J2EE architecture. Applications in a JAIN SLEE are hosted in a container and consist of one or more Service Building Blocks (SBB). Such an SBB can be best compared with an Enterprise Java Bean (EJB) in a J2EE server. SBBs are usually organized in a hierarchy with one root SBB.

Internally, almost all communication in the SLEE happens through events. The SLEE uses the publish-subscribe model for event distribution; this is a very important feature. Communication through "simple" method calls is still possible if an SBB has defined a local interface; in that case other SBBs can perform regular synchronous method calls. The idea behind the event based communication between SBBs is that every SBB performs its own task and then hands the result off to the next SBB in line. One could compare this to an assembly line in a factory. The communication with the outside world happens through so called Resource Adaptors (RA). We could for example have an RA for SIP related communication. This RA accepts incoming SIP messages, parses them and turns them into events understandable by the SLEE. If an SBB then wishes to reply to a SIP message it will use the API provided by the RA to construct and send this message back to the outside world. Through the use of RAs the JAIN SLEE can be protocol agnostic: any protocol can be supported by adding an appropriate RA. This does not necessarily mean that any application will be protocol agnostic, but certain components (e.g. a billing component) can be. All that is needed for this is a set of events used to interact with these SBBs. These events can be defined and included in the applications by the developers.
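The idea can be pictured with a minimal publish-subscribe sketch in plain Java. All names below are invented for illustration; a real SLEE adds transactions, priorities and activity contexts on top of this, but the delivery principle is the same: a subscriber only sees the event types it registered for.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal event router in the spirit of SLEE event distribution: each
// "SBB" subscribes to the event types it cares about and receives only
// matching events.
public class MiniEventRouter {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    public void subscribe(String eventType, Consumer<String> sbb) {
        subscribers.computeIfAbsent(eventType, k -> new ArrayList<>()).add(sbb);
    }

    public void publish(String eventType, String payload) {
        for (Consumer<String> sbb : subscribers.getOrDefault(eventType, new ArrayList<>())) {
            sbb.accept(payload); // hand the event off to the next SBB in line
        }
    }

    public static void main(String[] args) {
        MiniEventRouter router = new MiniEventRouter();
        // A billing "SBB" that only listens to call-ended events.
        router.subscribe("CallEnded", callId -> System.out.println("billing " + callId));
        router.publish("CallStarted", "call-42"); // not delivered to the billing SBB
        router.publish("CallEnded", "call-42");   // delivered
    }
}
```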

Figure 4. JAIN SLEE Architecture

As shown in the figure, the JAIN SLEE also offers some extra services such as a Timer, Alarm and Tracing service. Next to those there are also a Usage and an Activity Context Naming service. An Activity Context can best be described as the communication plane for the event routing between the different SBBs. An application developer does not need to worry about the actual event routing: it is sufficient to describe which events an SBB may generate and to which events it will listen, and the SLEE will take care of the entire event routing logic. The management of the JAIN SLEE application server and all deployed components can be performed using the standard JMX management interfaces. This includes the management of the Profile Table which contains provisioned data.

5.2 Relation to J2EE
As already mentioned, JAIN SLEE and J2EE have a lot in common. Both are container based designs and the SBBs of JAIN SLEE are the equivalent of the EJBs in J2EE. Both technologies also exploit the component based architecture by allowing application designers to transparently use home made and third party components in order to build applications; the use of off-the-shelf components is made very easy. Nevertheless, there are some significant differences as well. JAIN SLEE is strongly optimized for asynchronous event-driven logic. J2EE, on the other hand, was primarily designed for synchronous calls. It does have support for asynchronous logic through the use of the Java Message Service [15] (JMS), but the use of JMS would require a lot of extra work when developing and deploying if the same level of abstraction were required. With JMS the event subscription is static and the application deployer needs to manually create and assign the message queues; with JAIN SLEE the event subscription is highly dynamic, even across multiple nodes. The limitations of JMS would also influence the efficiency of the system, as one receives all published messages or none at all, in contrast with JAIN SLEE where only relevant events are delivered. Another point of comparison is the interaction with databases. In J2EE environments databases have a very important role: persistence is provided through the use of databases, and a majority of J2EE applications can conceptually even be considered as front-ends to databases (e.g. an online shop). Database transactions in a J2EE application typically occur very often and can be very complex. In JAIN SLEE applications database transactions are often very limited and very simple; persistence is achieved using in-memory cluster replication instead of a database. In summary, J2EE and JAIN SLEE have a lot in common considering the concepts used to design the application servers. However, they are targeted at fundamentally different application domains: enterprise applications for J2EE and communication applications for JAIN SLEE. Nevertheless, this does not mean a combination of both is not possible or useful: the JAIN SLEE specification includes a section regarding the interaction between J2EE and JAIN SLEE applications.

6 FUNCTIONAL COMPARISON OF TECHNOLOGIES
The previous sections provided an overview of the different technologies available and compared them with other related technologies. We can now compare them amongst each other. This section will functionally compare the different technologies and highlight the main differences. First of all we can make a distinction between JAIN SIP on the one hand and SIP Servlet and JAIN SLEE on the other: where the first is only a protocol stack, the latter two are full-blown container based application servers. This has some important implications.

6.1 Container Based
Container based applications have some important advantages which can simplify the development and deployment of applications. These advantages include container provided scalability and resilience. When developing an application using JAIN SIP you will need to include support for scalability yourself. Depending on the needs of the application and the target platform, this may range from adding support for multiple threads to support for clustering multiple platforms. SIP Servlet and JAIN SLEE by nature support multithreaded execution of applications and clustering of the application servers. For SIP Servlet the application only needs to be marked distributable; for JAIN SLEE even this is unnecessary. The developer does remain responsible for synchronizing the access to shared objects in the case of SIP Servlet. With JAIN SLEE there is the special case that an SBB developer can specify that an SBB component is non-reentrant. If an SBB entity of a non-reentrant SBB component is executing in a given transaction context, and another method invocation with the same transaction context arrives for the same SBB entity, the SLEE will throw an exception to the second request. This rule allows the SBB developer to program the SBB component as single-threaded non-reentrant code. Regarding resilience the above also applies: when using JAIN SIP you will need to build it into the application manually, whereas with SIP Servlet and JAIN SLEE the container will handle almost everything. If a Servlet or SBB causes a fatal exception, only the current session will be affected; with JAIN SIP this is the responsibility of the developer. The JAIN SLEE API includes a number of callbacks that applications may implement if they wish to be notified of rolled back transactions or exceptions. This means the SLEE allows applications to be involved in error recovery. The SIP Servlet specification does not define an equivalent mechanism.

6.2 SIP Servlet and JAIN SLEE
When comparing the two container based solutions we can focus on the differences. Table 1 presents the main differences between SIP Servlet and JAIN SLEE.

Table 1. Comparison of the SIP Servlet and JAIN SLEE technologies

SIP Servlet | JAIN SLEE
Designed to simplify SIP development (container functionality built on top of a SIP stack for programming convenience) | Designed for high throughput / low latency event processing (event routing integrated in the component model)
Designed to be SIP specific, yet embrace HTTP Servlet (SIP is the only protocol required in the container) | Designed to be a generic event processing container (abstracts the component model from the protocol through resource adaptors)
Built upon Servlet concepts (simple request/response programming model) | Built upon EJB concepts (SBBs are Service Building Blocks similar in feel to EJBs)
Servlets are stateless | SBBs may be stateful (or stateless)
Shared state may be stored in a separate session object; shared state is visible to all Servlets with access to the session | SBB state is private, type safe, transacted and a property of the SBB itself; access to shared state may be specified at deploy time
No transaction support (other than on the SIP state transition level) | Supports ACID properties of per-event transactions
Timer facility | Timer, Trace, Alarm, Statistics and Usage, and Profiles facilities
Handler-based, procedural programming model without a standard model for composition | Component-based, OO programming model with support for component composition
No management interface, no profile support | Management interface based on JMX for upgrade, lifecycle management, profiles, ...
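The non-reentrancy rule for SBB components mentioned above can be sketched in plain Java. The class and method names below are invented; in a real SLEE the container performs this check itself, keyed on the transaction context, but the guard logic is essentially this:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the SLEE guard for a non-reentrant SBB: a second
// invocation for the same entity within the same transaction context
// is rejected with an exception.
public class NonReentrantGuard {
    private final Set<String> activeTransactions = new HashSet<>();

    public void invoke(String transactionId, Runnable sbbMethod) {
        if (!activeTransactions.add(transactionId)) {
            // same SBB entity, same transaction context, still executing
            throw new IllegalStateException("re-entrant invocation rejected");
        }
        try {
            sbbMethod.run();
        } finally {
            activeTransactions.remove(transactionId);
        }
    }
}
```

A call that re-enters with the same transaction id fails, while a different transaction proceeds normally, which is what lets the developer write the SBB as single-threaded code.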

When looking at the SIP functionality both technologies can be considered functionally equivalent. But looking at the bigger picture, JAIN SLEE is much more than a SIP application server. With JAIN SLEE being protocol agnostic it would for example be relatively easy to build a gateway between different protocols: all that is required is an RA for each protocol and an SBB taking care of the actual translation (which might be rather difficult, but this is regardless of the technology used). In short, the usability of JAIN SLEE is much broader than SIP. SIP Servlet on the other hand is specifically designed for SIP related applications. As JAIN SLEE is the more general technology, it is also the more complex one.

Building SIP applications with SIP Servlet is very straightforward, albeit limited to SIP.

7 PERFORMANCE COMPARISON
Another important aspect of comparing different technologies is the actual performance. Therefore we tested the different technologies using the Proxy 200 test as shown in Figure 1. The actual benchmarks were run against the following SIP proxies:
• JAIN SIP: NIST proxy implementation [9]. This is a freely available implementation of a SIP proxy that makes use of the reference JAIN SIP stack. It is definitely a viable implementation although it is not fully optimized: for example, it is unable to make use of more than one CPU in a multiprocessor machine.
• SIP Servlet: Reference Implementation [11]. The RI is not optimized for performance and it would be unfair to compare this implementation with more mature implementations of the other technologies. Since this is a very recent technology we also present some preliminary performance results of a commercial implementation under development; more detailed results will be provided in future publications.
• JAIN SLEE: Open Cloud's Rhino 1.4.0 [14] was used with the included proxy implementation.

7.1 Test Setup
Before we discuss the obtained results we first give an overview of the test bed used and any special considerations taken, such as the tuning of the Java Virtual Machine.

Software
For benchmarking purposes SIPp [16] is used. SIPp is a free open source test tool and traffic generator for the SIP protocol. It can generate SIP traffic and establish and release multiple calls, and it can read custom XML scenario files describing anything from very simple to complex call flows. It features dynamic display of statistics about running tests (such as call rate, round trip delay and message statistics), periodic CSV statistics dumps, TCP and UDP over multiple sockets or multiplexed with retransmission management, and dynamically adjustable call rates. It includes a few basic SIPstone defined test setups. All tests in this paper were performed using SIPp and the previously specified scenario.

Hardware
All tests were performed using a dual Opteron 242 HP DL145 with 2GB of memory for the proxy. The clients were AMD Athlon XP 1600+ machines, with everything interconnected in a 100Mb switched network as shown in Figure 5. All platforms were running Debian GNU/Linux with a 2.6 kernel and the Sun JDK 1.5.0.

Figure 5. Network setup for test bed

7.2 Performance Tuning: JVM Tuning
As important as the actual hardware platform the application runs on are the Java Virtual Machine used and its tuning parameters. Of great importance is garbage collection, which may lead to the whole virtual machine being paused. With response time requirements within 25ms, a pause of the virtual machine of multiple milliseconds may be dramatic. Appropriate tuning of the virtual machine can improve the perceived latency and the garbage collection behavior. Before we get down to the actual tuning it is useful to have some background on the internals of Java memory management and garbage collection.

Java Garbage Collection Internals
An object in the virtual machine is considered garbage if it can no longer be reached through any reference in the running program. So a simple garbage collection algorithm might traverse all objects and check for references: if an object has no references to it, it can be considered garbage. This approach would be insufficient when dealing with large applications consisting of lots of live objects. Looking at the lifecycle of objects, it turns out that a large fraction are very short-lived, some objects live longer, and some even last for the total runtime of the application. To take advantage of this, algorithms using generational collection were introduced. Generational collection exploits such observed properties to avoid extra work; the short lifetime of objects, also called infant mortality, is one of them. The garbage collection algorithm can therefore focus on investigating recently created objects. In order to optimize for this scenario, memory is managed in generations, as shown in Figure 6. Three generations can be observed: Young, Tenured and Perm. The sections marked as "Virtual" are virtually reserved but not physically allocated until necessary.
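The generational behavior described above can be observed from within a running program through the standard java.lang.management API. The short sketch below is illustrative only; the collector names and counts it prints depend on the JVM and the collectors selected at startup.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Allocate many short-lived objects (classic infant mortality) and then
// read the per-collector statistics the JVM exposes.
public class GcObserver {
    public static void main(String[] args) {
        for (int i = 0; i < 1_000_000; i++) {
            byte[] shortLived = new byte[128]; // dies immediately, collected young
        }
        // Typically one bean covers the young generation, another the tenured one.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName()
                + ": collections=" + gc.getCollectionCount()
                + ", time=" + gc.getCollectionTime() + "ms");
        }
    }
}
```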

Figure 6. Structure of the Java Virtual Machine memory (Young generation: Eden plus Survivor Spaces 1 and 2; Tenured generation; Perm generation; sections marked "Virtual" are reserved but not yet allocated)

The Young generation consists of Eden and two survivor spaces. New objects are always created in Eden. One of the survivor spaces is empty at all times and serves as the destination of the next copy of all live objects from Eden and the other survivor space. Objects are moved between the survivor spaces until they are old enough to be moved to the tenured generation. When tuning the garbage collection, two performance metrics should be taken into consideration:
• Throughput, the percentage of total time not spent in garbage collection, considered over long periods of time.
• Pauses, the times the application seems unresponsive because garbage collection is occurring.
Taking the response time requirements of our applications into consideration, the Pauses will be an important measure to optimize. Choosing the right sizes for the generations can already mean a great improvement. For example a large young generation might mean an optimization of the

Figure 7. Garbage collection with default JVM options. The upper line shows the amount of physically reserved memory by the JVM, the thick middle line the amount of memory actually in use and the lower line the length of the pauses caused by the garbage collection.

Throughput, but will do this at the cost of longer Pauses. Next to the standard garbage collector, three newer ones are available as well. The throughput collector is a parallel version of the default collector. The concurrent low pause collector collects the tenured generation and does most of the collection concurrently with the application. The incremental low pause collector uses an incremental approach; this collector however has not changed since the J2SE platform version 1.4.2, is currently not under active development and will not be supported in future releases. Table 2 gives an overview of the virtual machine options we employed for the JAIN SLEE application server. A full description of all possible options can be found at [17]. The most important options are discussed here.

Table 2. List of Virtual Machine Options
1  -XX:+UseParNewGC
2  -XX:+UseConcMarkSweepGC
3  -XX:MaxNewSize=32m -XX:NewSize=32m
4  -Xms1024m -Xmx1024m
5  -XX:SurvivorRatio=128
6  -XX:MaxTenuringThreshold=0
7  -XX:CMSInitiatingOccupancyFraction=60
8  -Dsun.rmi.dgc.server.gcInterval=0x7FFFFFFFFFFFFFFE
9  -Dsun.rmi.dgc.client.gcInterval=0x7FFFFFFFFFFFFFFE
10 -XX:+DisableExplicitGC
11 -XX:+UseTLAB
12 -XX:+CMSIncrementalMode
13 -XX:+CMSIncrementalPacing
14 -XX:CMSIncrementalDutyCycleMin=0
15 -XX:CMSIncrementalDutyCycle=10
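Put together on a single command line this looks as follows. The jar name is a placeholder, and these are Sun JDK 1.5-era HotSpot flags: the CMS-related options shown here have since been deprecated and removed from current JDKs.

```shell
java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
     -XX:MaxNewSize=32m -XX:NewSize=32m \
     -Xms1024m -Xmx1024m \
     -XX:SurvivorRatio=128 -XX:MaxTenuringThreshold=0 \
     -XX:CMSInitiatingOccupancyFraction=60 \
     -Dsun.rmi.dgc.server.gcInterval=0x7FFFFFFFFFFFFFFE \
     -Dsun.rmi.dgc.client.gcInterval=0x7FFFFFFFFFFFFFFE \
     -XX:+DisableExplicitGC -XX:+UseTLAB \
     -XX:+CMSIncrementalMode -XX:+CMSIncrementalPacing \
     -XX:CMSIncrementalDutyCycleMin=0 -XX:CMSIncrementalDutyCycle=10 \
     -jar sip-proxy.jar
```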

Options 1 and 2 indicate that the concurrent low pause collector and the parallel version of the default (young generation) collector are used. This will reduce the pauses when collection occurs. Options 3 and 4 set the size of the young generation and the total virtual machine memory. By setting the minimum and maximum option to the same value we prevent possible resizing during the runtime of the application, thus preventing occasional pauses upon resizing. Option 5 sets the size of the survivor spaces relative to the size of Eden: setting this to 128 means the size of each survivor space is 1/128th of the size of Eden. Options 12 to 15 enable and configure the incremental mode of the concurrent collector. These options break up the concurrent phases into short bursts of activity in order to avoid long pauses. As an example, Figure 7 and Figure 8 show the behavior of the JVM garbage collection with default JVM options and optimized JVM options, respectively. The thick line shows the memory in use during the runtime of the SIP proxy application. The first figure shows that at certain moments in time major collections occur. During these collections the virtual machine is paused for a long time, sometimes even more than one second. The top thin line shows the amount of physical memory reserved by the virtual machine. In the first case this amount gets resized; in the second case it is fixed. Furthermore, no major garbage collections are necessary anymore, eliminating long pauses at the cost of more but significantly shorter pauses. In the following section we will see that the advantages of this approach outweigh this extra cost.

4

x 10

Response Time Repartition

0

50

2.5

Number of calls

2

1.5

1

0.5

0

100

150

Response time (ms)

Figure 9. Example of the response time distribution of the SIP calls obtained using the optimized JVM options in Table 2 unless mentioned otherwise. These options were defined using iterative tuning and carefully monitoring the throughput and pause metrics. Note that optimizing the virtual machine and garbage collection behavior is very application specific.

8 RESULTS This section gives an overview of the obtained test results. The different implementations were evaluated by a number of test runs. Each run consists of a five minute period of continuous call setups at a given call rate ranging from 10 calls per second to over 200 calls per second (caps). During all test runs the response times and processor load were measured. The average response time, the 50th percentile and 95th percentile are presented below as they give a view of the actual distribution of the response times. Figure 9 shows an example histogram of this distribution obtained with JAIN SLEE tested at 150 SIP calls per second, a total of 45000 calls and optimized JVM options. The majority of the calls (95%) are answered within 20ms. As the call rate goes up the distribution shifts more and more to the right. It is important to note that the system load presented in the graph represents the average load obtained during the entire test run. All final test results were

8.1 Test results: JAIN SIP

The JAIN SIP Presence proxy implementation is limited to the use of only one CPU on our dual-CPU machine. This lack of scalability becomes clear in the obtained results. Figure 10 shows the response times for this configuration. The maximum call rate obtained without losing calls is 35 caps, with a system load of 100% (measured on one CPU).

[Figure 10 near here: "Response time in relation to call rate and system load"; x-axis: call rate (caps), left y-axis: response time (ms) with average, 50th percentile and 95th percentile curves, right y-axis: system load (%).]

The obtained results were achieved using the Sun HotSpot implementation of the Java Virtual Machine. Other implementations might not offer exactly the same configuration options or might offer other optimizations. However, the main principles remain the same for all virtual machines.



Figure 8. Garbage collection with the optimized JVM options in Table 2. The amount of memory reserved by the virtual machine is fixed (512MB); the thick line shows the memory actually in use and the thin line shows the length of the pauses caused by the virtual machine.


Figure 10. JAIN SIP with optimized JVM options and one CPU.

Although this is an example application which can only use one processor and definitely has room for optimization, the achieved results are relatively good. The measured response times do increase along with the increasing system load, but remain below 30ms until the system load reaches 100%.

8.2 Test results: SIP Servlet

The reference implementation available for SIP Servlet performs rather poorly: it can sustain about 10 calls per second before calls get lost. Being a reference implementation, it clearly was not focused on performance, and it would be unfair to directly compare these results to a high-performance implementation like OpenCloud's Rhino. SIP Servlet application servers are currently being commercially implemented and, at the time of writing, no stable versions have been marketed yet. Tests were performed on a preliminary commercial version, which turned out to suffer from locking issues. With an early development preview version we were able to achieve and sustain a call rate of 150 caps without losing any calls and with acceptable response times. More thorough tuning and optimization might further improve these results. As we currently do not have access to a final and stable SIP Servlet server, the final results will be reported upon in a future publication.

8.3 Test results: JAIN SLEE

The graph presented in Figure 11 shows the obtained results of Rhino using the optimized JVM options already mentioned in Table 2. One of the first things to notice is the almost perfectly linear correlation between the system load and the number of calls per second: we see a 5% increase for every ten extra calls per second.

[Figures 11 and 12 near here: "Response time in relation to call rate and system load"; x-axis: call rate (caps), left y-axis: response time (ms) with average, 50th percentile and 95th percentile curves, right y-axis: system load (%).]

Figure 12. JAIN SLEE with default JVM options.


At this point the system load has yet to reach 50%. We assume that the explanation of these results is related to garbage collection. As already noted, the system load presented in the graph is the average obtained during the test run. In reality the system load does vary, and small peaks do occur; the messages arriving simultaneously with these peaks are most likely the ones that suffer the small delays surfacing in the 95th percentile. Up to that point, garbage collection and the actual processing of the SIP messages did not have to interfere.

Figure 12 shows the results of an equivalent test run, but without the optimized JVM options, except for setting the maximum virtual machine memory size to 1024MB. One of the first things to notice is the slight decrease of the system load. This shows that the optimized JVM settings do impose extra requirements on the hardware and the JVM. The increase of the system load, however, remains almost perfectly linear.




Figure 11. JAIN SLEE with optimized JVM options.

If we analyze the average response times and the 50th percentile, there is only a small increase as the number of calls increases. At 80 caps the average response time is approximately 4ms and the 50th percentile lies below 3ms. The 50th percentile remains at 3ms up to 150 caps; at that point the average response time is approximately 8ms. The 95th percentile, however, starts increasing at around 80 caps; until that point it too was below 3ms. Notice that at that point the system load has yet to reach 50%.

The next thing to notice is the sudden increase of the 95th percentile response time starting at 100 caps. Another important difference with the previous graph is the higher average response times: at 80 caps the average response time is 8ms versus the previous 4ms, and at 150 caps it is 12ms versus 8ms. Near the higher call rates the difference gets smaller, and the average response times are even lower than with the optimized options. Another important observation is the number of calls getting lost: with the optimized options we were able to sustain a call rate of 200 caps without losing any calls, whereas with the default options calls were already getting lost at a rate of 100 caps. As a consequence, the advantages of the optimized options definitely outweigh the small cost of extra system load. Additionally, we wanted to verify how well JAIN SLEE scales with regard to the number of available processors. The technology claims to scale linearly with the number of processors and nodes in a cluster. Therefore we also performed test runs in which we assigned the SLEE and all child processes to one processor using the

"taskset" utility. This means all JAIN SLEE processes are run on one single CPU, but the second CPU does remain available to the Operating System, so this configuration should perform slightly better than a true single cup machine. Figure 13 shows the obtained response times. Response time in relation to callrate and system load 200

100 Average response time 50th percentile 95th percentile

Response time (ms)

160

90 80

140

70

120

60

100

50

80

40

60

30

40

20

20

10

0

0

20

40

60

80 100 120 Call Rate (caps)

140

160

180

System Load (%)

180

System load

0 200
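Pinning the application server to a single CPU as described above can be done with the util-linux "taskset" utility; the actual Rhino start command is not given in the paper, so the java invocation and the pid below are placeholders:

```shell
# Start the SLEE pinned to CPU 0; child processes inherit the affinity mask.
# "rhino.jar" stands in for the actual start command.
taskset -c 0 java -jar rhino.jar

# The affinity of an already-running process can also be changed by pid
# (12345 is a placeholder pid):
taskset -p -c 0 12345
```

Because the mask is inherited, one invocation covers the SLEE and everything it spawns, while the operating system itself remains free to use the other CPU.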

Figure 13. JAIN SLEE with one CPU disabled.

If we analyze the results in Figure 13 we see the same pattern as in the previous test runs. The system load increases linearly with the number of caps, and at almost 100% load the 95th percentile response time is approximately 30ms at a call rate of 70 caps. This means JAIN SLEE does scale linearly with the number of processors and even performs better on a multiprocessor platform.

9 CONCLUSION

We compared three different Java SIP-enabled technologies. For SIP-related applications, all three of them can meet the requirements for the design of telecom backend applications, and anyone familiar with Java and SIP will be able to use them. SIP Servlet will probably be the most appealing for developers with a good knowledge of HTTP Servlets, while developers with a good knowledge of J2EE will quickly get familiar with JAIN SLEE. Looking at the performance, it is clear that the container-based platforms have an advantage when it comes to scalability; both SIP Servlet and JAIN SLEE perform very well. Performance evaluation results have been presented and the importance of JVM tuning has been detailed.

ACKNOWLEDGEMENTS

We thank Charlie Crighton from OpenCloud for providing us with the most recent version of Rhino.

REFERENCES

[1] Sun Microsystems, "Java APIs for Integrated Networks", http://java.sun.com/products/jain/
[2] Sun Microsystems, "The JAIN APIs: Integrated Network APIs for the Java Platform", May 2002.
[3] J. de Keijzer, D. Tait and R. Goedman, "JAIN: A New Approach to Services in Communication Networks", IEEE Communications Magazine, vol. 38, no. 1, pp. 94-99, January 2000.
[4] "Java Community Process", http://jcp.org/
[5] J. Rosenberg, H. Schulzrinne, G. Camarillo, A. Johnston, J. Peterson, R. Sparks, M. Handley and E. Schooler, "SIP: Session Initiation Protocol", June 2002, http://www.ietf.org/rfc/rfc3261.txt?number=3261
[6] H. Schulzrinne, S. Narayanan, J. Lennox and M. Doyle, "SIPstone - Benchmarking SIP Server Performance", April 12, 2002, http://www.sipstone.org/
[7] "JAIN SIP Specification", http://www.jcp.org/en/jsr/detail?id=32
[8] "JAIN SIP Reference Implementation", https://jain-sip.dev.java.net/
[9] "JAIN SIP Presence Proxy Implementation", https://jain-sip-presence-proxy.dev.java.net/
[10] "SIP Servlet API", http://jcp.org/en/jsr/detail?id=116
[11] dynamicsoft, "SIP Servlet Reference Implementation", http://www.sipservlet.org/
[12] "JAIN SLEE API Specification", http://jcp.org/en/jsr/detail?id=22
[13] "JAIN SLEE (JSLEE) v1.1", http://jcp.org/en/jsr/detail?id=240
[14] OpenCloud, "JAIN SLEE Reference Implementation", http://www.opencloud.com/
[15] "Java Message Service", http://java.sun.com/products/jms/
[16] "SIPp: SIP benchmarking utility", http://sipp.sourceforge.net/
[17] Sun Microsystems, "Tuning Garbage Collection with the 5.0 Java Virtual Machine", http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html

BIOGRAPHY

BRUNO VAN DEN BOSSCHE received his M.Sc. degree in Computer Science from the Ghent University, Belgium, in September 2004. At the moment, he is a Ph.D. student affiliated with the Department of Information Technology of the Ghent University. His main research interests include scalable software architectures, distributed software and the automatic optimal distribution of software. FILIP DE TURCK received his M.Sc. degree in Electronic Engineering from the Ghent University, Belgium, in June 1997. In May 2002, he obtained the Ph.D. degree in Electronic Engineering from the same university. At the moment, he is a part-time professor and a post-doctoral fellow of the F.W.O.-V., affiliated with the Department of

Information Technology of the Ghent University. Filip De Turck is author or co-author of approximately 80 papers published in international journals or in the proceedings of international conferences. His main research interests include scalable software architectures for telecommunication network and service management, and performance evaluation and optimization of telecommunication systems. BART DHOEDT received a degree in Engineering from the Ghent University in 1990. In September 1990, he joined the Department of Information Technology of the Faculty of Applied Sciences, University of Ghent. He is responsible for several courses on algorithms, programming and software development. His research interests are software engineering and mobile & wireless communications. Bart Dhoedt is author or co-author of approximately 100 papers published in international journals or in the proceedings of international conferences. His current research addresses software technologies for communication networks, peer-to-peer networks, mobile networks and active networks. PIET DEMEESTER received the Master's degree in electrotechnical engineering and the Ph.D. degree from the Ghent University, Gent, Belgium in 1984 and 1988, respectively. In 1992 he started a new research activity on broadband communication networks, resulting in the IBCN group (INTEC Broadband Communication Networks research group). In 1993 he became a professor at the Ghent University, where he is responsible for the research and education on communication networks. The research activities cover various communication networks (IP, ATM, SDH, WDM, access, active, mobile), including network planning, network and service management, telecom software, internetworking, network protocols for QoS support, etc. Piet Demeester is author of more than 400 publications in the area of network design, optimization and management.
He is a member of the editorial board of several international journals and has been a member of several technical program committees (ECOC, OFC, DRCN, ICCCN, IZS, etc.). BERT VAN VLERKEN received his Master's degree in electronics and computer engineering from the DeNayer Institute, Belgium, in June 2001. In August 2001 he joined the Research & Innovation department of Alcatel, where he is now the development lead within a team that investigates Java and standards-based platforms for service delivery and execution. NICO JANSSENS received his Master of Science degree in Electronic Engineering from the Ghent University, Belgium, in 1994. In 1994, he joined the VOD (Video on Demand) test team (Alcatel) evaluating the complete end-to-end solution (from server to set-top box). His main responsibility was the APON and SDH part of the network.

Later he led the test group working on the subsequent versions of APON, including the standard (G.983.1) compliant product. At the beginning of 2000, he joined the Alcatel Research & Innovation Centre located in Antwerp, Belgium. For the past few years his focus has been on network services supporting emerging applications, e.g. verifying QoS control mechanisms and the Application and Network Services Delivery Platform. Currently he is implementing the delivery platform using various Java technologies. He is an expert in the JAIN SLEE technology. JOHAN MOREELS received his Master's degree in Science from the Brussels University, Belgium, in July 1979. In May 1986, he obtained his Ph.D. degree in Sciences from the same university. From August 1979 till December 1986, Johan Moreels was a researcher at the V.U. Brussels, funded by the I.I.K.W., Belgium. From January 1987 till July 1993, Johan Moreels was a research assistant at the V.U. Brussels. He joined Alcatel in August 1993, where he worked in the architecture team of the IT department. In October 2001 he joined the Research and Innovation department of Alcatel. He is now working on standards-based platforms for service delivery and execution. THIERRY POLLET received a diploma degree in electrical engineering from the University of Ghent in 1989. In 1996 he joined the Alcatel Corporate Research Center, Antwerp. From September 1999, he was a work package leader within Research and Innovation, in charge of the research activities on metallic access, including topics such as ADSL, VDSL, "all digital loop" and network characterization. In 1999, he received the Alcatel Hi-Speed award for his contribution to the Alcatel patent portfolio in the domain of DSL. Since June 2001, he has been a member of the Alcatel Technical Academy, as Distinguished Member of the Technical Staff (DMTS). Currently, his focus is on the development of next generation service enabling platforms in converged networks. DIDIER COLLE received an M.Sc. degree in electrotechnical engineering (option: communications) from the Ghent University in 1997. Since then, he has been working at the same university as a researcher in the Department of Information Technology (INTEC). His research led to a Ph.D. degree in February 2002. His current research deals with the design and planning of communication networks, focusing on optical transport networks to support the next-generation Internet. He has been actively involved in several IST projects (LION, OPTIMIST, DAVID, STOLAS, NOBEL and LASAGNE), in COST actions 266 and 291 and in the ITEA/IWT TBONES project. His work has been published in more than 90 scientific publications in international conferences and journals.
