Developing a Concurrent Service Orchestration Engine Based on Event-Driven Architecture

Wei Chen, Jun Wei, Guoquan Wu, and Xiaoqiang Qiao

Institute of Software, the Chinese Academy of Sciences, Beijing 100190, P.R. China
{wchen,wj,gqwu,qiaoxiaoqiang}@otcaix.iscas.ac.cn

Abstract. WS-BPEL (BPEL for short) is the de facto standard for Web services composition. Service orchestration engines, also called BPEL engines, are in charge of executing and managing the workflows specified in BPEL. As a kind of server application, a scalable BPEL engine must deliver high performance under massive concurrency, and implementing a correct and highly concurrent BPEL engine is a challenging problem. We propose an approach based on event-driven architecture to design a BPEL engine and introduce finite state machines (FSMs) to describe the semantics of BPEL processes. We also test our BPEL engine and demonstrate its improved capability of handling massive concurrency compared to an engine based on the thread-based concurrency paradigm.

Keywords: Web services, SOA, BPEL, Event-driven, FSM.

1 Introduction

Service-Oriented Architecture (SOA) is now a prevalent architectural style for creating and integrating enterprise systems; it exploits the principles of service-oriented computing to achieve a tighter relationship between the business and the supporting information systems [1]. New services can be built through Web services composition, and the services communicate with each other using standard messaging protocols. The Business Process Execution Language (BPEL) [2] is a promising, de facto language for describing Web services orchestration in the form of business processes. An orchestration engine is responsible for managing the execution of these processes. As the Web services composed in BPEL grow more complex, how to orchestrate them efficiently to achieve high performance has become a crucial issue. Additionally, BPEL has built-in constructs expressing concurrency and synchronization, which makes processes specified in BPEL a kind of concurrent program [3]. It is the responsibility of the BPEL engine to efficiently handle the massive concurrency, complex synchronization and coordination that arise during process execution. Moreover, the BPEL engine should be able to handle thousands of concurrent processes with good throughput. Therefore, besides implementing the BPEL semantics completely, the engine should be well designed to achieve high performance when massive requests arrive concurrently.

R. Meersman and Z. Tari (Eds.): OTM 2008, Part I, LNCS 5331, pp. 675–690, 2008. © Springer-Verlag Berlin Heidelberg 2008


We choose the event-driven programming model to design the BPEL orchestration engine [4]. The proposed solution improves the engine's performance under high workload. The engine also facilitates the administration of process instances, such as debugging and runtime monitoring.

This paper is organized as follows. Section 2 illustrates the motivation and our approach to designing a BPEL orchestration engine architecture based on the event-driven paradigm. The next three sections then discuss the method in detail. Section 6 gives the architecture of our event-driven BPEL orchestration engine. Section 7 contains the experiment and the evaluation of its results. Finally, Section 8 discusses related work, and Section 9 presents our conclusions and future work. The method presented in this paper is based on WS-BPEL 1.1.

2 Motivation and Our Approach

With the growth of Internet services, the performance of server applications has been studied extensively over the past decade. As a kind of Internet service platform, the BPEL engine faces the same issue. A BPEL process deployed in the engine can create multiple instances at runtime to serve multiple clients simultaneously, and within a single instance there may be concurrency introduced by BPEL's structured activities. Thus, concurrency comes in two forms: concurrency across process instances, and concurrency within the same process instance.

There are two concurrency paradigms used to design server applications: the thread-based concurrency model and the event-based concurrency model. The most common design for server applications is the thread-based paradigm, since it is well supported by modern languages and programming environments. Threaded programs typically process each request in a separate thread; when one thread blocks waiting for I/O, another that is ready to run is scheduled, so the processor stays largely utilized. There are some differences when using this model to design BPEL engines: the engine dispatches one or more threads to each process instance, and the instance holds those threads throughout its lifecycle, whether it is running or blocked waiting for responses. Fig. 1 depicts the thread-based paradigm adopted in a BPEL engine. As in other server applications, introducing the thread-based model into a BPEL engine brings a series of drawbacks:

• The scheduling cost of threads is considered expensive, because most thread libraries rely on kernel resources of the operating system.
• The overheads associated with threading can lead to serious performance degradation when the number of threads is large.
• Most modern threads are preemptive in pursuit of fairness and responsiveness, but this must be supported by synchronization and mutual exclusion mechanisms such as semaphores, which makes the programming more error-prone (e.g., deadlock or livelock).


Fig. 1. Thread-based concurrency model used in BPEL engine (a dispatcher routes incoming requests to process instances, each holding its own threads)

In practice, an improved thread-based concurrency model introduces thread pools into the system. However, the introduction of thread pools into the BPEL engine brings problems of its own, such as:

• Busy waiting: a process instance holds at least one thread for its whole life cycle, even while waiting.
• Unfairness: when all threads are busy or blocked, new process instances or concurrent activities cannot be created or executed.

Besides the defects above, the characteristics of BPEL raise additional problems:

• It is difficult to control a process instance at runtime, because controlling threads is difficult.
• Serialization of process instances is hard to implement, since BPEL process instances involve thread contexts.

In contrast to the thread-based paradigm, a server application in the event-driven model is structured as a collection of callback functions, each reacting to a particular event, such as the arrival of a client request. A server based on this model is driven by a loop that keeps fetching events and executing the corresponding callbacks as they arrive. In this manner, overlap is achieved by serially executing the callbacks belonging to different requests. In particular, the event-driven approach implements the processing of each task as a finite state machine (FSM), where transitions between states are triggered by events. Fig. 2 depicts the event-driven paradigm adopted in the BPEL engine, in which each process instance can be seen as an event handler with an event queue attached to it. This paradigm is now adopted in more and more server applications, including BPEL engines. Adopting the event-driven model in the BPEL engine overcomes most of the issues of the thread-based one:

• An event is lightweight, and the scheduling of callbacks is realized at the application level instead of in the system kernel, so the cost of scheduling is much lower than in the thread-based model.
• A co-operative [18] mechanism is introduced to process multiple tasks, which further lowers the scheduling cost.


• Busy waiting can be avoided, since a process instance no longer holds one or more threads for its entire life cycle.
• With a suitable event scheduling algorithm, events in the engine can be scheduled comparatively fairly, so all process instances make progress rather than waiting for a long time.
• Since events and states are defined explicitly in advance, and all information about the current process instance can be acquired through its event handler, it is much easier to control the process instance.

Fig. 2. Event-driven concurrency model in BPEL engine (each process instance is an event handler with its own event queue, fed by a dispatcher)

However, this model assumes that event-handling threads do not block; for this reason, synchronous operations must be transformed into asynchronous ones when introducing the paradigm into BPEL engine design. Comparing the two paradigms, we believe the event-driven model is more appropriate for a BPEL engine from the viewpoint of server performance, especially under high concurrency, where its advantage is most evident. The approach we propose is therefore based on event-driven architecture. To adopt the event-driven model appropriately, several crucial steps must be considered; they are also the key elements of our approach:

1. Event-driven architecture focuses on the notion of an event as its primary abstraction, so the events involved in BPEL are extracted first of all.
2. Process instances should be designed as event handlers to process each task. We construct the instances as FSMs; building the FSMs with their states, transition conditions and operations is the second step.
3. Since event-handling threads in this model do not block, synchronous operations in BPEL must be transformed into asynchronous ones. In our approach, we extract such synchronous operations and send them to specific agents for processing; the agents hold threads that watch for such operations.

All of these aspects constitute our approach, and its sketch is shown in Fig. 3. We discuss these issues in detail in the following sections.

Fig. 3. The sketch of our approach (a dispatcher delivers requests to process instances, realized as FSMs, through event queues; a scheduler allocates threads from a thread pool; time-consuming work is placed in task containers)
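The event loop at the core of the event-driven paradigm described above can be sketched as follows. This is a minimal illustration only, not code from the engine; the class name, handler registration scheme and event tuples are our own assumptions.

```python
import queue
from typing import Callable, Dict

class EventLoop:
    """Minimal event-driven server core: callbacks driven by a queue."""

    def __init__(self) -> None:
        self.events: "queue.Queue" = queue.Queue()
        self.handlers: Dict[str, Callable] = {}

    def register(self, event_type: str, handler: Callable) -> None:
        self.handlers[event_type] = handler

    def post(self, event_type: str, payload=None) -> None:
        self.events.put((event_type, payload))

    def run_until_empty(self) -> None:
        # Callbacks belonging to different requests are executed serially;
        # overlap comes from interleaving events, not from threads.
        while not self.events.empty():
            event_type, payload = self.events.get()
            self.handlers[event_type](payload)

loop = EventLoop()
log = []
loop.register("request", lambda p: (log.append(f"handled {p}"),
                                    loop.post("done", p)))
loop.register("done", lambda p: log.append(f"completed {p}"))
loop.post("request", "A")
loop.post("request", "B")
loop.run_until_empty()
# log == ["handled A", "handled B", "completed A", "completed B"]
```

Note how both requests are "in flight" at once even though only one callback runs at a time: handling request A merely posts a follow-up event and returns, so B's handler can run before A completes.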

3 Extracting Events from BPEL

BPEL is a high-level specification language with XML syntax that describes a process's execution flow and its interaction with other processes. BPEL defines fifteen activity types, among which the most important are (V denotes variables):

Process ::= Process(Activity1, …, Activityn, V)
Activity ::= BasicActivity | StructuredActivity
BasicActivity ::= receive | invoke | reply | assign | throw | terminate | compensate | wait | empty
StructuredActivity ::= sequence | switch | flow | while | pick | scope

These activities execute and interact with each other by receiving and sending events, and the behavior of the process can be described as a series of state transitions driven by the activities' execution. We model the BPEL process as an event-based behavior model, founded on finite state machines (FSMs), that describes the behavior of processes. Extracting events from BPEL therefore helps establish this event-based behavior model and describe its transitions using ECA rules. The behavior model and transition rules used in our approach are discussed in the next section.

In our approach, events in BPEL are classified into two types: activity events and message events. Activity events characterize the internal behavior of the process itself, while message events are mainly used for communication with external parties.

• Activity events are generated by BPEL process instances themselves as they execute. For example, when an activity finishes, a completion event is generated by this activity and sent to the process.
• Message events come from sources outside the business process, such as messages received from clients and responses returned from partner Web services. These messages are accepted by activities in the process and may cause state changes.


All events in the BPEL engine belong to a certain BPEL instance. Two basic attributes describe each event: the event source and the event target.

• The event source attribute records the activity that generated the event. Since message events come from outside the BPEL instance, this attribute is empty for them.
• The event target attribute stores the destination activity the event is sent to; when an event arrives at its target, a specified transition or processing may be invoked.

For each activity, three common activity events can be extracted: Activation Event (AE), Completion Event (CE) and Fault Event (FE). An activity is activated on receiving an AE, and it generates a CE to inform the following activity when it finishes its action. An FE may occur during the activity's execution. To handle the concurrency introduced by the flow activity and the link element in BPEL, the Link Changed Event (LCE) and Link Activation Event (LAE) are extracted to cope with synchronization. Besides the activity events, several message events are extracted for communication with external services: On Received Event (ORE), Invoke Fault Event (IFE) and Invoke Return Event (IRE). An ORE represents a message from a Web service invocation or a request from a client. IFE and IRE represent the result of invoking a partner Web service: when the invocation succeeds an IRE is returned, otherwise an IFE occurs. The meanings of these typical events are shown in Table 1.

Table 1. Typical events in BPEL

Activity events:
− Activation Event (AE): activates an activity; sent from the parent of the current activity.
− Completion Event (CE): indicates the completion of an activity; sent from the activity to its parent.
− Fault Event (FE): generated and sent to the ancestor scope element when an activity's execution fails.
− Link Changed Event (LCE): generated and sent to the flow activity when a source activity of a link finishes.
− Link Activation Event (LAE): generated by the flow activity and sent to the target activity of the link when the link changes.

Message events:
− On Received Event (ORE): describes Web service invocation information; sent to receive activities or onMessage handlers.
− Invoke Fault Event (IFE): generated and dispatched to the corresponding invoke activity when a Web service invocation fails.
− Invoke Return Event (IRE): generated and dispatched to the corresponding invoke activity when a Web service invocation succeeds.
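The source/target attributes and the activity/message taxonomy above can be captured in a small data structure. The following Python sketch is purely illustrative; the class layout and field names are our own, not the engine's.

```python
from dataclasses import dataclass
from typing import Optional

# Short names of the two event categories from Table 1.
ACTIVITY_EVENTS = {"AE", "CE", "FE", "LCE", "LAE"}
MESSAGE_EVENTS = {"ORE", "IFE", "IRE"}

@dataclass
class Event:
    name: str              # short name, e.g. "AE" or "ORE"
    source: Optional[str]  # generating activity; None for message events
    target: str            # destination activity the event is sent to

    def is_message_event(self) -> bool:
        return self.name in MESSAGE_EVENTS

# Message events come from outside the instance, so their source is empty.
ore = Event("ORE", None, "receive1")
# Activity events record the generating activity.
ce = Event("CE", "assign1", "sequence1")
```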

4 Mapping BPEL Instances to FSMs

According to WS-BPEL 1.1, the behavior of a business process can be described as a series of state transitions, and the transitions are triggered by the execution of the activities in


BPEL. When an event arrives, the corresponding activity executes, which may change the context of the process instance and trigger state transitions. Thus we construct the BPEL process instance model using finite state machines (FSMs) [7], based on the execution state of each activity and the relationships between them.

A process can be represented as a tree according to the BPEL syntax: the root of the tree is the process node, and each child node represents an activity in the process. To represent the BPEL process, we first design an atom FSM for each node in the process tree; an atom FSM can generate events. After that, edges between atom FSMs are added, indicating the relations between activities. Fig. 4 illustrates a BPEL FSM tree; every node in the tree can receive and produce events. Following the direction of the edges, an event is sent to its target atom FSM, and as the atom FSMs change state, the whole BPEL process transitions accordingly.

Fig. 4. BPEL FSM model (a tree of atom FSMs: a process atom FSM at the root, with sequence, receive, scope, reply and invoke atom FSMs as descendants)

Definition 1. The atom FSM in our approach is defined as a 7-tuple M = (Σin, Σout, V, T, s, F, Δ), where:

− Σin: the set of input events
− Σout: the set of output events
− V: the set of inner variables, which constrain the ECA transitions
− T: the set of states
− s: the initial state
− F: the end state
− Δ: the set of transitions, based on the ECA rules


Each transition t ∈ Δ is represented as a triple t = (q, [C] E/A, q'), where q ∈ T is the source state before the transition and q' ∈ T is the target state after it. The transition from q to q' is annotated with an ECA rule of the form [C] E/A, where E is the event arriving at the FSM, C is the condition under which the transition fires, and A is the action performed when the transition happens. For the sake of brevity, only two examples, the receive activity FSM and the flow activity FSM, are given to illustrate the mapping from BPEL activities to FSMs, representing a basic activity and a structured activity respectively.



Example 1 -- Receive Activity FSM

Receive is a basic activity in BPEL, and its FSM model is as follows:

− Σin = {Activation Event, Link Activation Event, On Received Event}
− Σout = {Completion Event, Fault Event, Link Changed Event}
− T = {Initial, Active, Running, Completed}
− V = {joinCondition, hasIncomingLinks, targets, sources}
− s = Initial
− F = {Completed}
− Δ: the set of transitions listed in Table 2



Table 2. Transitions of Receive FSM

No.  | Source  | Event | Condition                                          | Action                                                                 | Target
(1)  | Initial | AE    | hasIncomingLinks = false                           | —                                                                      | Running
(2)  | Initial | AE    | hasIncomingLinks = true                            | —                                                                      | Active
(3)  | Active  | LAE   | joinCondition = false ∧ suppressJoinFailure = true | set the transitionCondition of the corresponding link source to false; send an LCE to the enclosing flow | Completed
(4)  | Active  | LAE   | joinCondition = false ∧ suppressJoinFailure = false| send an FE to its direct scope                                         | Completed
(5)  | Active  | LAE   | joinCondition = true                               | —                                                                      | Running
(6)  | Running | ORE   | —                                                  | receive the input value and set variables; send an LCE to the flow; send a CE to the parent node | Completed

The state transition chart of the Receive FSM is shown in Fig. 5: the FSM moves among the states Initial, Active, Running and Completed according to the event it receives and its current state.


Fig. 5. State transition chart of Receive FSM

Table 3. Transitions of Flow FSM

No.  | Source  | Event | Condition                                          | Action                                                                 | Target
(1)  | Initial | AE    | hasIncomingLinks = false                           | —                                                                      | Running
(2)  | Initial | AE    | hasIncomingLinks = true                            | —                                                                      | Active
(3)  | Active  | LAE   | joinCondition = false ∧ suppressJoinFailure = true | set the transitionCondition of the corresponding link source to false; send an LCE to the link source's flow | Completed
(4)  | Active  | LAE   | joinCondition = false ∧ suppressJoinFailure = false| send an FE to its direct scope                                         | Completed
(5)  | Active  | LAE   | joinCondition = true                               | send an AE to all of its child activities                              | Running
(6)  | Running | CE    | finishedChildrenNum = childrenSize                 | send an LCE to its direct flow ancestor; send a CE to its parent activity | Completed
(7)  | Running | LCE   | —                                                  | change the link value; send an LAE to the link target                  | Running

Example 2 -- Flow Activity FSM

Flow is a structured activity in BPEL, and its FSM model is as follows:

− Σin = {Activation Event, Link Activation Event, Completion Event, Link Changed Event}
− Σout = {Activation Event, Completion Event, Fault Event, Link Changed Event, Link Activation Event}
− T = {Initial, Active, Running, Completed}
− V = {joinCondition, hasIncomingLinks, targets, sources, finishedChildrenNum, childrenSize, suppressJoinFailure, links}
− s = Initial


− F = {Completed}
− Δ: the set of transitions listed in Table 3



The state transition chart of the Flow FSM is shown in Fig. 6: the FSM moves among the states Initial, Active, Running and Completed according to the event it receives and its current state.

Fig. 6. State transition chart of Flow FSM
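To make the ECA-style transitions concrete, the following Python sketch implements the Receive FSM transitions of Table 2. The class layout and attribute names are our own simplification, and the actions are reduced to recording the short names of the emitted events; a real atom FSM would send them to the parent or the enclosing flow.

```python
class ReceiveFSM:
    """Illustrative atom FSM for the receive activity (Table 2)."""

    def __init__(self, has_incoming_links: bool,
                 suppress_join_failure: bool = True) -> None:
        self.state = "Initial"
        self.vars = {"hasIncomingLinks": has_incoming_links,
                     "joinCondition": True,
                     "suppressJoinFailure": suppress_join_failure}
        self.emitted = []  # stand-in for events sent to parent / flow / scope

    def handle(self, event: str) -> None:
        v, emit = self.vars, self.emitted.append
        if self.state == "Initial" and event == "AE":        # rows (1), (2)
            self.state = "Running" if not v["hasIncomingLinks"] else "Active"
        elif self.state == "Active" and event == "LAE":      # rows (3)-(5)
            if v["joinCondition"]:
                self.state = "Running"
            elif v["suppressJoinFailure"]:
                emit("LCE")                                  # notify the flow
                self.state = "Completed"
            else:
                emit("FE")                                   # fault to the scope
                self.state = "Completed"
        elif self.state == "Running" and event == "ORE":     # row (6)
            emit("LCE")                                      # notify the flow
            emit("CE")                                       # notify the parent
            self.state = "Completed"

fsm = ReceiveFSM(has_incoming_links=False)
fsm.handle("AE")   # Initial -> Running (row 1)
fsm.handle("ORE")  # Running -> Completed, emits LCE and CE (row 6)
```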

5 Transforming Synchronous Invocations into Asynchronous Ones

A BPEL process contains some time-consuming tasks, such as 1) synchronous Web service invocations and 2) wait activities and timer tasks generated from onAlarm handlers. In the thread-based concurrency paradigm, these tasks are executed synchronously, and the threads in charge of them are blocked until the tasks complete. When applying the event-driven architecture in the engine, however, there must be no synchronous or blocking operations, which leads us to design specific agents to process such time-consuming work.

The agents for time-consuming tasks in our approach are designed independently. So far we have designed a timer task agent and an invoke task agent. The timer task agent holds dedicated threads that watch for timer tasks in the engine, and the invoke task agent is the only proxy interacting with partner Web services. All such tasks are produced by the BPEL FSMs and stored in queues named task containers. Agents take tasks from these queues and generate the corresponding message events when the operations finish; the message events are then sent back to the BPEL FSMs to keep them running. In this way, synchronous operations are turned into asynchronous ones. For example, when a synchronous invocation occurs, it is handed to an agent, and other FSMs can be scheduled to work rather than waiting for the reply; when the agent finishes the work, it notifies the corresponding FSM, which is then scheduled at a proper time.
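The task-container/agent pattern above can be sketched as follows, assuming plain queues for the task container and the event queue; the names, the shutdown sentinel and the simulated invocation are illustrative, not taken from the engine.

```python
import queue
import threading

task_container: "queue.Queue" = queue.Queue()  # tasks produced by the FSMs
event_queue: "queue.Queue" = queue.Queue()     # message events consumed by the FSMs

def invoke_agent() -> None:
    """Agent thread: drains the task container, posts IRE/IFE events back."""
    while True:
        instance_id, operation = task_container.get()
        if operation is None:          # shutdown sentinel (our convention)
            break
        try:
            result = operation()       # the blocking Web service call
            event_queue.put((instance_id, "IRE", result))  # Invoke Return Event
        except Exception as fault:
            event_queue.put((instance_id, "IFE", fault))   # Invoke Fault Event

agent = threading.Thread(target=invoke_agent)
agent.start()

# An FSM hands off a synchronous call and is immediately free to be
# scheduled; the reply arrives later as an ordinary message event.
task_container.put(("inst-1", lambda: "response"))
task_container.put(("inst-1", None))   # shut the agent down for this demo
agent.join()
```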

6 Architecture of Our Service Orchestration Engine

The BPEL orchestration engine architecture we propose is shown in Fig. 7. Our engine is built on top of an application server, which offers the runtime environment. There are several basic modules in our engine, such as event

Fig. 7. BPEL orchestration engine architecture (on top of an application server: a runtime engine holding BPEL instances represented as FSMs, event queues, a task container, a connector, an administration console, and managers including the process manager, invoke manager, schedule manager and timer manager)

queues, FSMs representing process instances, a schedule manager and several other managers. Most of the components reveal the characteristics of event-driven architecture. The relationships and interactions between these parts are depicted in Fig. 7.

In this architecture, a scheduler named the schedule manager is the control center of the engine. It is derived from the controller proposed in [6], which is used to affect scheduling and thread allocation. Although our approach has no division into stages, we put the schedule manager in charge of dispatching threads and events to process instances to drive the FSMs. Instead of owning threads, an FSM holds a thread only while it is running: when handling events, an FSM acquires threads and events from the schedule manager and acts following the ECA rules [8, 9] it contains. Once event handling finishes, the thread is returned to the schedule manager.

As discussed previously, agents are set up in our engine to cope with time-consuming tasks; they are named the invoke manager and the timer manager. These agents hold dedicated threads and watch the task containers they are in charge of, taking tasks from the containers in a loop; the tasks are generated by the process instances. As soon as the handling of a task finishes, events are produced and sent back to the event queues of the process instances.

There are also some other modules in our architecture, including the runtime engine, the connector and the administration console, which offer assistant functions such as control and communication with clients. For the sake of brevity, these modules are not discussed in detail here.

Besides the high performance brought by the event-driven architecture, the system architecture we employ has some further advantages:

• Event queues can be seen as the center of the engine's architecture: almost all modules communicate with each other through them. In this manner, event queues improve code modularity and simplify the application design. For instance, a process instance and the scheduler interact through the event queue rather than communicating directly.


• The scheduler, as a controller, is responsible for resource control. We have implemented two tunable functions: adjusting the number of threads in the thread pool, and adjusting the number of events processed by each invocation of an event handler (the batching factor). Currently these factors are adjusted statically through a configuration file; tuning resources dynamically according to the load condition is future work. All of this makes the engine configurable.
• We can easily obtain a snapshot of each instance through the scheduler, because the FSMs store the necessary information about the current process instances. Consequently, we can control process instances through the scheduler, pausing, resuming or canceling them, by designing control message events and sending them to the instances. This makes debugging and controlling much easier to realize.
• The event scheduling strategies used by the scheduler can be implemented as modules and plugged into our system, which makes the engine extensible.
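The batching factor mentioned above can be illustrated with a small round-robin scheduling sketch: each time an instance's handler is invoked, it consumes at most batching_factor queued events before the scheduler moves on. Everything here (function name, data shapes) is our own simplification; in the engine the value comes from a configuration file.

```python
from collections import deque

def schedule(instance_queues: dict, handlers: dict,
             batching_factor: int) -> list:
    """Round-robin over instances, draining at most `batching_factor`
    events per handler invocation; returns the dispatch trace."""
    trace = []
    while any(instance_queues.values()):
        for name, q in instance_queues.items():
            for _ in range(min(batching_factor, len(q))):
                handlers[name](q.popleft())
                trace.append(name)
    return trace

queues = {"inst1": deque(["e1", "e2", "e3"]), "inst2": deque(["e4"])}
handlers = {"inst1": lambda e: None, "inst2": lambda e: None}
trace = schedule(queues, handlers, batching_factor=2)
# trace == ["inst1", "inst1", "inst2", "inst1"]
```

A larger batching factor amortizes dispatch overhead per instance; a smaller one gives fairer interleaving, which is the trade-off the configuration exposes.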

7 Evaluation

Our orchestration engine, named OnceBPEL2.0, is an improvement of the previous version, OnceBPEL1.0, which was designed on the thread-based concurrency paradigm described earlier. Since BPEL is a relatively new language, there are currently no standardized BPEL benchmarks that we could use in our performance evaluation. Instead, we ran the same tests on versions 1.0 and 2.0 and compared the results.

7.1 Test Example

The intention of our tests is to evaluate the engine's capability under high-concurrency conditions. Since a BPEL engine invokes partner Web services through invoke activities, uncertain factors such as network bandwidth and the quality of the Web services would influence the engine's apparent performance. To obtain accurate performance results, we excluded such factors as much as possible: the test case contains no invoke activities, so no Web services are called. All tests ran under the same hardware and network conditions.

The emphasis of the test case is performance under high concurrency. In its BPEL definition, a flow activity is used to create concurrent sub-processes, containing 10 while activities. We used the while activity blocks, which include some assign activities, to simulate a real business process; in each while activity the specified operations are executed 20 times iteratively. Each instance thus contains 11 processes: 1 main process and 10 sub-processes. In the tests we generated from 10 up to 200 virtual users through a load-test tool. Fig. 8 shows the structure of the test case.

Fig. 8. Structure of the test case (process: a sequence of receive, assign, a flow containing 10 parallel while activities, and reply)

7.2 Results Evaluation

We compared and analyzed the test results in terms of average response time and average throughput, in order to evaluate the improvement achieved by applying the event-driven architecture in our BPEL orchestration engine.

The average response times of OnceBPEL1.0 and OnceBPEL2.0 are compared in Fig. 9. The average response time of OnceBPEL2.0 is clearly shorter than that of OnceBPEL1.0 regardless of the number of virtual users. Moreover, as the number of concurrent virtual users grows, the average response time of OnceBPEL1.0 increases dramatically, while that of OnceBPEL2.0 changes smoothly. Beyond 100 virtual users the difference is large: with 200 virtual users running concurrently, the average response time of OnceBPEL1.0 was 54.024 seconds, while that of OnceBPEL2.0 was only 15.821 seconds, a decline of about 70%.

The difference in average throughput between OnceBPEL1.0 and OnceBPEL2.0 can be seen clearly in Fig. 10. As the number of concurrent virtual users grows, the average throughput of both engines increases at first; OnceBPEL1.0 then declines slowly after reaching its peak of 3627 bytes per second at 50 concurrent virtual users, while the average throughput of OnceBPEL2.0 keeps increasing slowly with the number of virtual users. At 200 concurrent requests, the average throughput of OnceBPEL2.0 was about 200% higher than that of OnceBPEL1.0.

According to these tests and comparisons, performance under high concurrency has improved dramatically in both average response time and average throughput. The improvement mainly lies in the event-driven architecture we used in place of the thread-based one, which drastically reduces the consumption of system resources such as threads. In addition, some optimization techniques, such as object pooling, were used, which also reduce the system cost to some extent.

Fig. 9. Average response time (seconds) of OnceBPEL1.0 and OnceBPEL2.0 versus number of virtual users

Fig. 10. Average throughput (bytes/sec.) of OnceBPEL1.0 and OnceBPEL2.0 versus number of virtual users

8 Related Work

Considerable research effort has been devoted to BPEL orchestration engine development. Mainstream BPEL engines can be classified into open-source ones, such as the ActiveBPEL Engine [14] and BPWS4J [15], and commercial ones, such as Oracle's BPEL Process Manager [16] and IBM's WebSphere Process Server [17]. Although there are numerous products and prototypes, few of them have published the internals of their implementations. We only found descriptions of BPWS4J and BPEL-Mora, in [12] and [13] respectively. Both describe the overall implementation with a focus on design and architecture, and both adopt event-driven architecture to some extent; however, neither provides details on how the concurrent semantics of BPEL are implemented.

In [10], an approach to designing a BPEL orchestration engine based on ReSpecT tuple centres, an extension of LINDA, was proposed. In that approach, the BPEL semantics are expressed as logic tuple rules and the processes run as series of reactions. Moreover, [11] presents a WS-BPEL engine prototype built upon the event-driven architecture and join patterns provided by the Microsoft Concurrency and Coordination Runtime (CCR), in which BPEL concurrency semantics are translated to CCR ports and the processes execute on the CCR in an event-driven manner. Comparing these two approaches, the concurrency models are essentially similar.

9 Conclusion and Future Work

Designing and developing BPEL orchestration engines is hard because of the complexity of managing interaction and concurrency in a general way. In this paper we have proposed an approach to designing an orchestration engine based on event-driven architecture. We presented the mapping from BPEL activities to FSMs, the design of state transition rules based on ECA, and the creation of agents for time-consuming work. In addition, the performance and extensibility of the engine have been enhanced. In future work we plan to test our engine with large numbers of BPEL applications in order to verify and improve it thoroughly. We are also considering work on strategies for scheduling BPEL events to suit the complex situations that arise while the engine is running; such strategies can be modularized and plugged into our engine. Since the states of process instances are easy to capture in our system, we will furthermore work on monitoring and controlling BPEL processes, an important direction for our future work.

Acknowledgments. This work is supported partially by the National Natural Science Foundation of China under Grant No. 60673112, the National Grand Fundamental Research 973 Program of China under Grant No. 2002CB312005, and the High-Tech Research and Development Program of China under Grant No. 2006AA01Z19B.
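For illustration, the mapping summarized above (BPEL activities to FSMs driven by ECA rules) can be sketched as follows. This is a hypothetical reconstruction, not the engine's actual code: an ECA rule pairs an event and a condition with an action and a target state, and the activity FSM fires the first matching rule.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Illustrative lifecycle states for a single BPEL activity.
enum State { INACTIVE, RUNNING, COMPLETED, FAULTED }

// One ECA transition rule: on `event`, if `condition` holds while in
// `from`, run `action` and move to `to`. Names are illustrative.
final class EcaRule {
    final State from, to;
    final String event;
    final Predicate<ActivityFsm> condition;
    final Consumer<ActivityFsm> action;
    EcaRule(State from, String event, Predicate<ActivityFsm> condition,
            Consumer<ActivityFsm> action, State to) {
        this.from = from; this.event = event;
        this.condition = condition; this.action = action; this.to = to;
    }
}

final class ActivityFsm {
    private State state = State.INACTIVE;
    private final List<EcaRule> rules = new ArrayList<>();

    void addRule(EcaRule r) { rules.add(r); }
    State state() { return state; }

    // Fire the first rule whose source state, event, and condition match.
    void onEvent(String event) {
        for (EcaRule r : rules) {
            if (r.from == state && r.event.equals(event)
                    && r.condition.test(this)) {
                r.action.accept(this);
                state = r.to;
                return;
            }
        }
        // Unmatched events are simply ignored in this sketch.
    }
}
```

A receive activity, for instance, could be wired with two rules: INACTIVE plus a "start" event moves it to RUNNING, and RUNNING plus a "messageReceived" event moves it to COMPLETED.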

References

1. Papazoglou, M.P.: Service-Oriented Computing: Concepts, Characteristics and Directions. In: 4th International Conference on Web Information Systems Engineering (WISE), pp. 3–12. IEEE Press, New York (2003)
2. Andrews, T., et al.: Specification: Business Process Execution Language for Web Services Version 1.1 (2003), http://www.ibm.com/developerworks/library/specification/ws-bpel/
3. Yan, J., Li, Z.J., Sun, W., Zhang, J.: BPEL4WS Unit Testing: Test Case Generation Using a Concurrent Path Analysis Approach. In: 17th International Symposium on Software Reliability Engineering, pp. 75–84 (2006)
4. McGovern, J., Sims, O., Jain, A., Little, M.: Enterprise Service Oriented Architectures: Concepts, Challenges, Recommendations. Springer, Netherlands (2006)
5. Christensen, E., Curbera, F., Meredith, G., Weerawarana, S.: Web Services Description Language (WSDL) 1.1 (2001), http://www.w3.org/TR/wsdl
6. Welsh, M., Culler, D., Brewer, E.: SEDA: An Architecture for Well-Conditioned, Scalable Internet Services. In: 18th Symposium on Operating Systems Principles, pp. 230–243 (2001)
7. Hopcroft, J.E., et al.: Introduction to Automata Theory, Languages, and Computation, 2nd edn. Addison-Wesley, Reading (2001)
8. Bae, J.S., Bae, H., Kang, S.-H., Kim, Y.H.: Automatic Control of Workflow Processes Using ECA Rules. IEEE Transactions on Knowledge and Data Engineering 16(8), 1010–1023 (2004)
9. Goh, A., Koh, Y.-K., Domazet, D.S.: ECA Rule-Based Support for Workflows. Artificial Intelligence in Engineering 15(1), 37–46 (2001)
10. Cabano, M., Denti, E., Ricci, A., Viroli, M.: Designing a BPEL Orchestration Engine Based on ReSpecT Tuple Centres. In: Proceedings of the 4th International Workshop on the Foundations of Coordination Languages and Software Architectures, pp. 139–158 (2005)
11. Lu, W., Gunarathne, T., Gannon, D.: Developing a Concurrent Service Orchestration Engine in CCR. In: International Conference on Software Engineering, pp. 61–68 (2008)
12. Curbera, F., Khalaf, R., Nagy, W.A., Weerawarana, S.: Implementing BPEL4WS: the Architecture of a BPEL4WS Implementation: Research Articles. Concurr. Comput.: Pract. Exper. 18(10), 1219–1228 (2006)
13. Gunarathne, T., Premalal, D., Wijethilake, T., Kumara, I., Kumar, A.: BPEL-Mora: Lightweight Embeddable Extensible BPEL Engine. In: Emerging Web Services Technology, Halle (Saale), Germany (2007)
14. ActiveBPEL, http://www.active-endpoints.com/active-bpel-engine-overview.htm
15. BPWS4J, http://www.alphaworks.ibm.com/tech/bpws4j
16. Oracle BPEL Process Manager, http://www.oracle.com/technology/global/cn/products/ias/bpel/index.html
17. WebSphere Process Server, http://www-306.ibm.com/software/integration/wps/
18. Adya, A., Howell, J., Theimer, M., Bolosky, W.J., Douceur, J.R.: Cooperative Task Management without Manual Stack Management. In: Proceedings of the General Track of the Annual Conference on USENIX Annual Technical Conference (ATEC 2002), pp. 289–302 (2002)
