Batch Processing in Workflow: the Model and the Implementation

Jianxun Liu¹,², Haiyan Chen², Jinmin Hu²
¹Hunan Knowledge Grid Lab, ²School of Computer Science and Engineering,
Hunan University of Science and Technology, Hunan 411201

Abstract. Batch service is a specific kind of stochastic service system with a broad application background. In practice, batch service commonly appears as one or several steps of a production or business process. A workflow management system (WfMS) is a powerful tool for modeling and executing business processes and has gained extensive attention from both industry and academia. Unfortunately, traditional WfMSs do not take batch service into consideration, let alone provide the corresponding supporting mechanisms. This paper investigates how to introduce batch service into a WfMS. It first discusses the batch service scheduling model and proposes a workflow model to support it. It then analyzes the problems and issues involved in extending a traditional WfMS to support batch service, together with the design and implementation details.

1 Introduction

Workflow is the automation of a business process [19,21]. A WfMS (Workflow Management System) is a system that defines, creates and manages the execution of workflows through the use of software, running on one or more workflow engines, which is able to interpret the process definition, interact with workflow participants and, where required, invoke the use of IT tools and applications [20]. In a WfMS, many workflow instances (process cases) may exist at the same time, and each process case can be further divided into many activity instances (task instances), which are submitted to a software agent or a human being for execution. With the support of workflow technology, enterprises can greatly enhance the execution efficiency of their business processes [12,22]. Workflow has thus become a leading tool for modeling enterprise business rules by taking advantage of continuous advancements in IT (Information Technology) [13]. A nonprofit organization, the Workflow Management Coalition (WfMC), was set up to lead the development of workflow technologies [19]. Many research results have been published, and many WfMS products have been used as platforms for business process and office automation management [5-8,13,15,17]. However, little research in workflow focuses on modeling dynamic batch activities in business processes, that is, the situation in which, when many business process cases exist at the same time, some activity instances from different cases can be clustered together and then submitted to a software agent or human being for execution.

Nevertheless, such business processes are very common in practice. They are usually referred to as scheduling with batching [11,16] or group technology [2,18] in flow shop scheduling, chemical processes, and so on. Here we simply call this batch service of business processes, since batch service in business processes differs from that in flow shop scheduling problems, as illustrated in Section 2. Traditional workflow aims at separating and abstracting business processes from the software systems of an enterprise, so as to avoid hard-coding business process related aspects, such as control and data flow, into the organization's software systems, which may lead to inflexible systems that are hard to modify and maintain [9,13]. The contribution of this paper is that it further separates the batch service control logic from software components, so that a WfMS can model and support processes containing batch service steps more easily and conveniently, extending the application domain of WfMSs.

The rest of the paper is organized as follows. Section 2 introduces stochastic service systems, batch service systems and their characteristics, and analyzes the scheduling model of batch service in a business process environment. Section 3 presents the modeling of business processes containing batch service steps in workflow. Section 4 details an enactment service with an ECA (Event-Condition-Action) based engine to support the execution of such business processes. Section 5 concludes the paper and points out directions for further study.

2 Dynamic batch service and its optimal scheduling model

Batch service is a special kind of stochastic service system [1,14]. Every stochastic service system contains the following common components [10]: 1) the arrival process, which describes the probability distribution governing customer arrivals; the most common arrival process is the so-called Poisson arrival process, in which the inter-arrival times are independently, identically and exponentially distributed; 2) the queuing discipline, i.e., the rule for selecting which waiting customers in the queue are served next; 3) the service organization¹, which indicates how many service devices (servers) there are and how long each customer is served (the service time distribution). The purpose of research on stochastic service systems is to design and control such systems reasonably, so that they incur the most economical cost while meeting customer demands [10]. There are many kinds of stochastic service systems, such as the simplest stochastic service system, the M/G/1 system, and various special-purpose stochastic service systems. Batch service can save system resources and improve the efficiency of system processing [14]. Hence, among the special-purpose stochastic service systems, those supporting batch service have an especially broad application background. In practice, however, batch processing systems do not exist independently: they appear as one or several steps of a real process flow, such as painting in a production process or delivery in an order-driven business process.

¹ In the following description, service organization, executor and agent are treated as the same concept.

Kosten [10] and Papadaki [14] describe a batch service system as follows: 1) customers arrive according to the simplest arrival process with parameter λ at a waiting system with a single server; 2) the idle server begins service if and only if the number of waiting customers is greater than or equal to r (r ≥ 1, a constant); 3) the server serves the r customers simultaneously, i.e., their service starts and ends at the same time; and 4) the service time of a customer is independent of its arrival interval. Actual application systems, however, are more complex than this. For instance: 1) there may be different kinds of services, and only those that are equal or similar to each other can be batch processed; 2) each customer may have a deadline; in a competitive society we cannot over-sacrifice customers' interests to achieve economical use of system resources, otherwise we will lose customers; 3) the service time can be roughly estimated or evaluated; 4) batch service may not exist independently. Based on this analysis, this paper calls an actual system with the above features a dynamic batch service system (DBSS), or dynamic grouping technology. A DBSS with a single server can be described by the following parameters: 1) the simplest arrival process, i.e., exponentially distributed inter-arrival times with mean 1/λ; 2) m kinds of services (whose arrival processes follow some probability distribution); 3) a deadline t_DL on each customer's service requirement; 4) a buffer of length B_l, which keeps the customers waiting for service; 5) the processing capacity Cp of the server, meaning that the server can handle at most Cp customer requests simultaneously; 6) an exponentially distributed service time with mean 1/µ.

To optimize, evaluate or assess the system, different criteria can be used, such as the processor's throughput η_p, the average waiting time T_avg, and the deadline deviation rate P_us (the percentage of customers, among all services, whose service time exceeds t_DL). These optimization targets, however, conflict with each other: for instance, increasing η_p will increase T_avg and P_us. Hence the practical optimization goal is to make η_p as large as possible while T_avg and P_us remain satisfactory; in other words, we should increase the probability of batch service occurring without affecting delivery times.

The setting of the waiting queue (buffer) W_Que is very important. On the one hand, only if the service waiting queue is long enough is the occurrence of batch service possible. On the other hand, a waiting queue that is too long prolongs the average waiting time. Thus we must consider how to set an appropriate waiting queue length, when a processor should be made to wait, and when a processor must resume processing. To discuss these problems, we introduce concepts such as the buffer length, the lower bound of the scheduling alarm, and the processor's waiting time; how to concretely set these parameters is beyond the scope of this paper.

- B_l: the buffer's capacity, i.e., the maximum length of the buffer, which is subject to physical conditions;

- Q: the current length of the queue, i.e., the number of customers waiting in the queue;
- B_Lower: the lower bound of the scheduling alarm, used to decide whether a waiting processor should be activated: when Q ≤ B_Lower, a processor should wait for a random time ∆t1; when Q > B_Lower, the waiting processor can provide service immediately. Supposing that batching is subject to service types, the setting of B_Lower is correlated with the number of service types m. Assuming the arrivals are evenly distributed over the types, the condition B_Lower > m should be satisfied to ensure that the buffer is likely to contain services (products) of the same kind; therefore m is a lower bound for B_Lower. In actual applications, the value of B_Lower can be obtained by simulation or from experience.
- ∆t1: a random waiting time of the processor, correlated with Q and other characteristics such as customers' deadlines. The values of these characteristics can be set according to practical experience or the actual situation.

According to the analysis above, the optimal batch service and scheduling model can be illustrated as in Figure 1, in which λ1, ..., λm indicate that the arrival process contains m different kinds of service types entering a buffer (with capacity B_l and alarm bound B_Lower), from which an optimal scheduling algorithm dispatches batches to the service organization.

Fig. 1. Batch service scheduling model with a single server

Hence the actual algorithm for optimizing batch service and scheduling should consider two problems: 1) how to group the services in a buffer and choose a group to submit to a server for processing, which we call the GSA (Grouping and Selection Algorithm); and 2) the scheduling algorithm for starting up the GSA, which we call the GSA-SA (GSA Schedule Algorithm). The GSA is not directly related to process control and can be viewed as an agent independent of workflow management, so it is not considered further in this paper. The GSA-SA, however, is related to process control and is a research emphasis of this paper.
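To make the GSA-SA role concrete, the following Python sketch simulates the dispatch decision for the single-server model above. It is only an illustration under assumed names: DBSSConfig, gsa_sa_should_dispatch and the example values are hypothetical stand-ins for λ, µ, m, t_DL, B_l, B_Lower, Cp and ∆t1, and the GSA itself is left out, since grouping is treated here as an agent independent of workflow management.

```python
import random
from dataclasses import dataclass

@dataclass
class DBSSConfig:
    """Hypothetical container for the DBSS parameters of Section 2."""
    arrival_rate: float      # lambda: mean arrival rate (exponential inter-arrival times)
    service_rate: float      # mu: mean service rate (exponential service times)
    num_service_types: int   # m: kinds of services
    deadline: float          # t_DL: per-customer deadline
    buffer_capacity: int     # B_l: maximum length of the waiting queue
    alarm_lower_bound: int   # B_Lower: scheduling alarm threshold (should exceed m)
    processor_capacity: int  # Cp: customers the server can handle in one batch

def gsa_sa_should_dispatch(queue_len: int, processor_idle: bool,
                           waited: float, wait_budget: float,
                           cfg: DBSSConfig) -> str:
    """GSA-SA decision for one idle check: 'dispatch' starts the GSA now,
    'wait' keeps the processor waiting for delta_t1, 'idle' does nothing."""
    if not processor_idle or queue_len == 0:
        return "idle"
    if queue_len > cfg.alarm_lower_bound or queue_len >= cfg.buffer_capacity:
        return "dispatch"              # Q > B_Lower (or buffer full): group immediately
    if waited < wait_budget:
        return "wait"                  # Q <= B_Lower: wait delta_t1 for more arrivals
    return "dispatch"                  # waiting budget used up: serve what is queued

# Usage with made-up numbers.
cfg = DBSSConfig(arrival_rate=0.5, service_rate=0.2, num_service_types=3,
                 deadline=8.0, buffer_capacity=20, alarm_lower_bound=5,
                 processor_capacity=4)
delta_t1 = random.expovariate(1.0)     # a random processor waiting time
print(gsa_sa_should_dispatch(queue_len=3, processor_idle=True,
                             waited=0.0, wait_budget=delta_t1, cfg=cfg))
```

The point of the sketch is the waiting rule: an idle processor with Q ≤ B_Lower waits up to ∆t1 for more arrivals before it gives up and serves whatever is queued.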

3 A workflow model for batch processing

3.1 The patterns of batch service in workflow

Practical analysis shows that batch services do not exist alone: they appear as one or several steps of a production or business process. Thus, batch service can appear in workflow processes in three different patterns, as shown in Figure 2. In the figure, we assume that workflow W contains three activities A, B and C. The steps encircled by dotted lines are called BPAs (Batch Processing Areas), which means that the activities inside them can be batch processed. The labels above the activities represent their agents (i.e., the entities that execute them); for example, Agtb is the executor of activity B.

Suppose W has two instances, W1 and W2. In Figure 2a, only activity B has batch properties. In Figure 2b, although both B and C must take batching into consideration, they are independent of each other, i.e., each batch activity is independent. For example, the instances of activity B, W1.b and W2.b, are grouped together as a new activity instance b', which is then submitted to Agtb for processing. After Agtb has executed b', W1.b and W2.b appear again as two independent items in the output queue (work list) of Agtb. At agent Agtc, grouping is done again, and the grouping strategies may be different, so W1.b and W2.b could end up in different groups. The situation in Figure 2c is different from that in Figure 2b: activities B and C in W use the same grouping strategy, which means they are grouped only once within the whole BPA, at step B, and do not resume their original states until they leave the BPA. That is, if W1.b, W2.b ∈ b', then W1.c, W2.c ∈ c'. Under these circumstances, the agents' capabilities and the characteristics of each step in the BPA must be considered during grouping, so the GSA-SA algorithm for optimal grouping is more complex than in Figures 2a and 2b. The same holds for the number of agents (servers providing services for customers) at each step of a BPA: one activity instance may be executed by many participants.

Fig. 2. Different batch service patterns in workflow

According to the above analysis, the batch service model in workflow can be represented as an X/Y/Z model, in which X is the number of BPAs in a workflow, Y is the maximum number of activities in any BPA of the workflow, and Z is the maximum number of servers providing services for each BPA activity. There are thus eight different forms: 1/1/1, 1/1/N, 1/M/1, 1/M/N, K/1/1, K/1/N, K/M/1 and K/M/N. For example:

- 1/1/N means that there is only one BPA, containing only one activity, but 1 to N agents (servers);
- 1/M/1 means that there is only one BPA, containing M (M > 1) activities but only one agent (server);
- K/1/1 means that there are K (K > 1) BPAs, each containing only one activity and only one agent (server);
- K/M/N means that there are K (K > 1) BPAs, each of which may contain M (M > 1) activities and N (N > 1) agents (servers).

3.2 A workflow model supporting batch service

Batch service puts forward new demands on the workflow model and the WfMS. Because batch service here is dynamic and requires that activity instances be grouped into a new activity which is then submitted to an executor for processing, the grouping method must be considered, and all of this should be expressible in the workflow model. To support batch service in workflow, this paper introduces a new activity type, the BPAT (Batch Processing Activity Type), into workflow models, together with the BPA concept. We also introduce the concept of a batch processing sub-process: wherever there is a BPAT, there must be a corresponding batch processing sub-process. A BPAT describes the optimization of grouping activity instances as well as the scheduling methods and strategies. It is defined as follows.

Definition 1: A BPAT is a five-tuple BPAT = (GSA-SA, GSA, B_Lower, B_l, GC), in which GSA-SA and GSA are the algorithms introduced above (GSA-SA is analyzed in detail later), B_Lower and B_l are the buffer-related parameters introduced before, and GC (Grouping Characteristic) is the classifying characteristic used to group activity instances.

Fig. 3. Definition of the batch activity type, the batch sub-process and their relationship

By adding a batch activity type and a batch sub-process, a workflow model can support the definition of batch service. For example, the workflow type of Figure 2b could be given the new definition shown in Figure 3. When defining a workflow process, we need only set the GSA, the GSA-SA, the relevant buffer parameters and the grouping and classifying characteristics of the BPATs, and let the workflow enactment service do the rest automatically.
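As one possible reading of Definition 1, a process-definition tool could record a BPAT roughly as follows. The class and field names, and the example values, are hypothetical; the paper does not prescribe a concrete syntax.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BPAT:
    """Batch Processing Activity Type: the five-tuple of Definition 1."""
    gsa_sa: str       # name of the GSA-SA scheduling algorithm that starts the GSA
    gsa: str          # name of the grouping-and-selection algorithm
    b_lower: int      # B_Lower: lower bound of the scheduling alarm
    b_l: int          # B_l: buffer capacity of the BPA
    gc: List[str]     # GC: grouping characteristics used to classify activity instances

# Hypothetical BPAT for the batch activity B of Figure 2b / Figure 3.
delivery_batch = BPAT(
    gsa_sa="single-server-eca",        # the GSA-SA variant of Section 4.2
    gsa="group-by-identical-type",     # one of the embedded GSA components
    b_lower=5,
    b_l=20,
    gc=["product_type", "destination"],
)
```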

4 A workflow enactment service for batch processing

The workflow enactment software interprets the process definition, controls the instantiation of processes and the sequencing of activities, adds work items to user work lists, and invokes application tools. This is done through one or more co-operating workflow engines, which manage the execution of individual instances of the various processes [13]. Event-condition-action (ECA) rules have been advocated by database practitioners as a powerful mechanism to transform passive data repositories into active ones. The rules make the data repositories react to internal or external events and trigger a chain of activities that include notifying users and applications or performing database updates [4]. An ECA rule has the following form:

    Rule (rule_name)
      ON event_name
      WITH condition_expression
      DO actionSet
    EndRule

It means that when a simple or composite event occurs and the condition expression holds, a set of actions or operations is executed. An ECA rule-based system can support ad-hoc, adaptive, flexible and dynamic workflows that are modifiable at run time, which allows the system designer to modify workflows as requirements change. Thus, most workflow engines use an ECA rule-based method to control the execution sequence of activity instances in a workflow process [3,4]. In our system, we also use ECA rules as the internal control logic for activities.
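The rule form above can be read as a data structure plus a firing test. The sketch below shows one minimal encoding of the ON/WITH/DO semantics; the Rule class, fire_if_ready and the example rule are illustrative assumptions, not the engine's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Rule:
    """Rule(rule_name) ON events WITH condition DO actions EndRule."""
    name: str
    on_events: Set[str]                    # simple or composite event (all must occur)
    condition: Callable[[dict], bool]      # the WITH condition_expression
    actions: List[Callable[[dict], None]]  # the DO actionSet

def fire_if_ready(rule: Rule, occurred: Set[str], context: dict) -> bool:
    """Execute the action set when every named event has occurred and the condition holds."""
    if rule.on_events <= occurred and rule.condition(context):
        for action in rule.actions:
            action(context)
        return True
    return False

# Usage: a toy rule that routes a case onward once its work item is finished.
complete = Rule(
    name="complete_activity",
    on_events={"WorkItemFinished"},
    condition=lambda ctx: ctx.get("all_data_present", False),
    actions=[lambda ctx: print("route case", ctx["case_id"], "to the next activity")],
)
fire_if_ready(complete, occurred={"WorkItemFinished"},
              context={"case_id": "W1", "all_data_present": True})
```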

Although batch service has been introduced, not many changes are required at the definition level of the workflow model. However, we must enhance the workflow enactment service, especially the ECA rule-based workflow engine, in order to implement batch service automatically. The following is a detailed analysis of the workflow execution system.

Fig. 4. Changes of the activity instance arrival sequence outside and inside a BPA (B_l = 5): before entering the BPA, instances b1, ..., b6 arrive in order; inside the BPA, instances with the same characteristics (e.g., b1, b3, b4) are grouped into new instances such as b1'; after exiting the BPA, the original instances reappear in a changed order.

4.1 New requirements for workflow engines

Batch service dynamically groups similar activity instances in the work list together according to their characteristics, which changes the processing sequence of activity instances within the work list. In summary, the main changes to the workflow enactment service when batch service is considered are as follows:

1. After grouping, activity instances with the same characteristics in a BPA are replaced by a newly built activity instance. That means that, within the batch processing sub-process, the activity instances seen by participants are different from the activity instances that have exited from batch processing. Figure 4 depicts the difference.

Assume that activity instances arrive continuously and that b1, b3 and b4 have the same characteristics; after these instances enter the BPA, they are grouped together as a new activity instance b1'. If there are many activity instances in a BPA, they are all grouped according to the same rule, which in practice means that the system generates a new workflow instance, namely an instance of the batch sub-workflow. After the instances exit from the BPA, they are separated again; by then, however, their arrival sequence has changed, for example b2 is now behind b4.

2. There must be a special component responsible for grouping and separating activity instances, as well as for instantiating the grouping sub-process (sub-workflow). This is because the instantiation of a batch sub-process differs from that of a common sub-process: a common sub-process instance corresponds one-to-one to its parent process instance, whereas a batch sub-process instance corresponds one-to-many, i.e., a single batch sub-process instance can correspond to many parent process instances.

3. After batch processing has been introduced, the status management of activities must be changed, since different activities are grouped and temporarily generate a new inner sub-process instance while the original sub-process instances are paused. Hence, to keep the monitoring and querying of the original sub-process or activity instances working, we must map the status of activities in the new temporary process instance onto the status of the original activity instances. For example, in an enterprise manufacturing system driven by customer orders, what a customer cares about most is which step his or her order has reached; he or she does not care how the system works internally.

Fig. 5. State transition diagram of an agent (states Idle and Busy; the arrival of a new grouping activity moves the agent from Idle to Busy, and a service-completed event moves it back)

4.2 Design and implementation of the scheduling engine for batch service

From the analysis above, the complexity of the scheduling algorithm differs for each batch model. For the K/M/N model, the optimal scheduling itself is a very complex, NP-hard problem, and its study is not the emphasis of this paper. Here we are only interested in how to support the implementation of batch service in a WfMS. To show that this is feasible by extending a traditional workflow enactment system, we take as an example the implementation, in a workflow engine, of an optimal scheduling algorithm for a single service organization, i.e., the X/X/1 case.

Fig. 6. State transition diagram of buffers (states Empty, Lower, Normal and Full; transitions are triggered by SA, Service Arrival, and EndG, End of grouping activity instances)

4.2.1 The GSA-SA algorithm of batch service for the X/X/1 model

According to the analysis above, the GSA-SA algorithm must consider status information coming from the service organizations and from the buffer queue. Hence we first analyze the state transition diagrams of a service organization (namely an activity executor or agent) and of the buffer queue, shown in Figure 5 and Figure 6 respectively. In both figures, state transitions are triggered by events, i.e., they form typical event-driven finite state machines. Figure 5 depicts the state transition diagram of a service organization. It has only two states, Busy and Idle: Busy means that the service organization is working (providing a service for someone), and Idle means that a service has finished and the service organization is waiting for the arrival of a new one. Figure 6 depicts the state transition diagram of the buffer.

There are four states in Figure 6, namely Empty, Lower, Normal and Full, which respectively mean that the buffer is empty, below the alarm's lower bound, normal, and full. Lower is related to B_Lower, introduced in Section 2. The state transitions in Figure 6 can be explained as follows: when the system starts and the buffer is empty, the buffer stays in the Empty state; while services keep arriving but the length Q of the waiting queue is still less than or equal to B_Lower, the buffer is in the Lower state; when B_Lower < Q < B_l, it is in the Normal state; and when Q reaches B_l, it is Full.

Fig. 7. State transitions of the GSA-SA algorithm (intermediate states IS1, IS2, IS3, a grouping state GS and a Busy state, driven by the Empty, Lower, Normal, Exit, End_Waiting_1, Processor Idle and Processor Busy events)

The events concerning the GSA-SA algorithm are listed in Table 1.

Table 1. Events used by the GSA-SA algorithm
- Normal: generated when Q > B_Lower
- Full: generated when Q ≥ B_l
- End_Waiting_1: generated when the waiting time ∆t1 is over
- a second timer event, generated when the waiting time ∆t2 is over
- SA (Service Arrival): indicates that a new service has arrived at the waiting queue
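As an illustration of how the buffer states of Figure 6 and the events of Table 1 could be derived from the queue length Q, consider the sketch below; the BufferState enum and the function names are assumptions made for this example only.

```python
from enum import Enum

class BufferState(Enum):
    EMPTY = "Empty"
    LOWER = "Lower"      # 0 < Q <= B_Lower
    NORMAL = "Normal"    # B_Lower < Q < B_l
    FULL = "Full"        # Q >= B_l

def buffer_state(q: int, b_lower: int, b_l: int) -> BufferState:
    """Map the queue length Q onto the four buffer states of Figure 6."""
    if q == 0:
        return BufferState.EMPTY
    if q <= b_lower:
        return BufferState.LOWER
    if q < b_l:
        return BufferState.NORMAL
    return BufferState.FULL

def buffer_events(old_q: int, new_q: int, b_lower: int, b_l: int) -> list:
    """Events a buffer monitor could emit when Q changes (cf. Table 1)."""
    events = []
    if new_q > old_q:
        events.append("SA")                         # a new service arrived at the queue
    new_state = buffer_state(new_q, b_lower, b_l)
    if new_state != buffer_state(old_q, b_lower, b_l):
        events.append(new_state.value)              # e.g. "Normal" once Q passes B_Lower
    return events

print(buffer_events(old_q=5, new_q=6, b_lower=5, b_l=20))   # ['SA', 'Normal']
```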

According to these events and the foregoing analysis of the GSA-SA algorithm, the four ECA rules shown in Table 2 implement a GSA-SA algorithm for the case of a single service organization. In these ECA rules, SetTimer(), ClearTimer(), GPAlgorithm(), ResetECARule(), ClearEvent(), OpenBufferMonitor() and CloseBufferMonitor() are all inner functions of the workflow execution service. SetTimer(Para1, Para2) sets a timer: parameter Para1 is the timer event that will be generated after waiting for the time period given by parameter Para2. ClearTimer(Para) clears a timer that has been set. GPAlgorithm(Para) calls the GSA algorithm, where the parameter Para selects which GSA algorithm to use. ResetECARule() resets the ECA rules used to implement GSA-SA so that the next scheduling cycle can run. ClearEvent(Para) clears all events in the event queue that relate to the BPA Para. OpenBufferMonitor(Para) allows the engine to monitor the buffer events (such as Lower, Normal and so on) generated by the BPA Para. CloseBufferMonitor(Para) is the opposite of OpenBufferMonitor(): it disables the monitoring of buffer events pertaining to the BPA Para, so that no buffer events are generated for it. To implement the system, an interpreter for ECA rules is needed, which is detailed below.

2) Implementation

Traditional workflow execution systems do not support batch service, so we must extend them. Here, a batch service scheduling engine, responsible for scheduling batch service, is added to the workflow enactment service. Figure 8 depicts the architecture of the batch service scheduling engine and its relationship to the other parts of the workflow enactment service. From Figure 8 we can see that the batch service scheduling engine is composed of an ECA rule interpreter, a set of embedded GSA algorithm components providing several optional group scheduling algorithms, a buffer manager, an ECA rule table, an event queue and a set of buffers, where each buffer represents a BPA.

Table 2. ECA rules implementing the GSA-SA algorithm for a single service organization (group activity ID: BPA01)

Rule 1: ON Event Idle_Processor
        DO Action {ClearTimer(End_Waiting_1); ClearEvent(BPA01); OpenBufferMonitor(BPA01); ResetECARule(BPA01);}

Rule 2: ON Event Idle_Processor AND End_Waiting_1
        DO Action {CloseBufferMonitor(BPA01); GPAlgorithm(BPMethod); ResetECARule(BPA01);}

Rule 3: ON Event Idle_Processor AND Normal
        DO Action {CloseBufferMonitor(BPA01); GPAlgorithm(BPMethod); ResetECARule(BPA01);}

Rule 4: ON Event Idle_Processor AND Lower
        DO Action {SetTimer(End_Waiting_1, ∆t1);}

The workflow engine continuously generates activity instances and sends them to the buffer manager.

Then, the buffer manager sends them to the buffer queues of their relevant BPAs. At the same time, the buffer manager monitors each BPA buffer's status; when the status changes, the corresponding events (according to the rules defined in Table 1) are generated. An executor (service organization) generates Busy and Idle events and sends them to the event queue. The ECA rule interpreter is responsible for interpretation and execution: when the event and condition parts of an ECA rule are satisfied, the rule is interpreted and executed. Using this mechanism, a single ECA rule interpreter can handle the scheduling work of all BPAs. Moreover, the implementation is very convenient: if a GSA-SA algorithm needs to be modified, we only have to modify the relevant rules in the ECA rule table instead of recoding the system. Another benefit is that, if a workflow engine is itself based on ECA rules, the two can be integrated to simplify the implementation of the workflow enactment service.
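Continuing the hypothetical rule encoding sketched earlier in this section, the benefit of keeping the ECA rule table as data can be illustrated as follows: changing the GSA-SA then means editing rule entries rather than recoding the engine. The engine object and its methods (clear_timer, gp_algorithm, and so on) are stand-ins for the inner functions listed with Table 2, not a real API.

```python
# Hypothetical registration of Table 2's rules for BPA01. Each "inner function"
# of the enactment service is represented by a method on an assumed engine object.
def make_rule_table(engine, bpa="BPA01", method="BPMethod", delta_t1=30.0):
    """Return Table 2 as plain data, so changing the GSA-SA means editing rules."""
    return [
        # Rule 1: on Idle_Processor, clear leftovers of the previous scheduling cycle.
        {"name": "Rule1", "on": {"Idle_Processor"},
         "do": [lambda: engine.clear_timer("End_Waiting_1"),
                lambda: engine.clear_event(bpa),
                lambda: engine.open_buffer_monitor(bpa),
                lambda: engine.reset_eca_rule(bpa)]},
        # Rule 2: idle processor and waiting time delta_t1 elapsed: group now.
        {"name": "Rule2", "on": {"Idle_Processor", "End_Waiting_1"},
         "do": [lambda: engine.close_buffer_monitor(bpa),
                lambda: engine.gp_algorithm(method),
                lambda: engine.reset_eca_rule(bpa)]},
        # Rule 3: idle processor and queue above B_Lower (Normal event): group now.
        {"name": "Rule3", "on": {"Idle_Processor", "Normal"},
         "do": [lambda: engine.close_buffer_monitor(bpa),
                lambda: engine.gp_algorithm(method),
                lambda: engine.reset_eca_rule(bpa)]},
        # Rule 4: idle processor but queue still low (Lower event): wait delta_t1.
        {"name": "Rule4", "on": {"Idle_Processor", "Lower"},
         "do": [lambda: engine.set_timer("End_Waiting_1", delta_t1)]},
    ]
```

Swapping in a different GSA only changes the method argument, and tuning the waiting behaviour only changes Rule 4; the interpreter itself stays untouched.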

Figure 8. The architecture of the batch service scheduling engine within the workflow enactment service (an ECA rule interpreter, an ECA rule table, an event queue, a timer, embedded GSA components, a buffer manager and per-BPA buffers BPA1, ..., BPAn, connected to the workflow engine, the actors and external applications)

3) Reasoning calculus

The ECA rule interpreter is the core of the scheduling engine. To implement it, we use an interpretation algorithm based on modular resolution, which leaves a remainder: we first take an event from the event queue and then resolve it against the event parts of all ECA rules that mention it. For instance, for Rule 2 in Table 2, after the event Idle_Processor has occurred, its event part is reduced from "ON Event Idle_Processor AND End_Waiting_1" to "ON Event End_Waiting_1". We then check the ECA rules one by one to determine whether they are satisfied: if an ECA rule's event expression is empty after resolution, the rule is satisfied and its corresponding actions are triggered. The detailed implementation theory can be found in [4]. Under this scheme, after each run of the scheduling algorithm we must restore the ECA rules for the next scheduling cycle; the system provides the inner function ResetECARule() for this purpose.

The Idle_Processor event is the starting point of a new batch scheduling procedure (only when a service organization is idle is it necessary to start a new scheduling). Before a new batch scheduling procedure starts, however, we must clear the events and timers left over from the previous grouping scheduling; otherwise confusion may arise. For example, a grouping scheduling procedure could run as follows: 1) first Rule 4 is satisfied and sets up the timer event End_Waiting_1; 2) before End_Waiting_1 arrives, the Normal event arrives, so Rule 3 is satisfied, a group is formed and submitted to a service organization for processing, and all ECA rules related to the scheduling algorithm are reset; 3) at this point, however, End_Waiting_1 occurs; 4) then, when Idle_Processor occurs, Rule 2 is satisfied and another grouping scheduling is triggered, which is an error because at that time Q is less than or equal to B_Lower; the spurious End_Waiting_1 event occurred only because the settings of the previous scheduling cycle had not been cleared. Hence we add Rule 1: when an Idle_Processor event arrives, we perform an initialization that clears all timers and events related to the BPA, starts buffer monitoring and resets the ECA rules, since a new batch scheduling procedure is started only when an Idle_Processor event arrives (meaning a service organization is idle), and only then is the buffer's status significant. For the same reason, before a grouping scheduling is performed, CloseBufferMonitor() is called to disable buffer monitoring (see Rules 2 and 3), and monitoring is restarted only after a service organization becomes idle again.
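A minimal sketch of the resolution step just described, under assumed data shapes: each arriving event is resolved away from the event part of every rule that mentions it, and a rule fires once its event part becomes empty.

```python
def interpret(event_queue, rules):
    """Resolution-style interpretation: consume events, reduce the event parts of
    the rules, and fire a rule once its remaining event expression is empty."""
    pending = {name: set(spec["on"]) for name, spec in rules.items()}
    while event_queue:
        event = event_queue.pop(0)
        for name, remaining in pending.items():
            remaining.discard(event)               # resolve the event away (the "remainder")
            if not remaining:                      # event expression exhausted: rule satisfied
                for action in rules[name]["do"]:
                    action()
                pending[name] = set(rules[name]["on"])   # reset for the next scheduling cycle

# Example mirroring Rule 2 of Table 2: after Idle_Processor occurs, the event part
# is reduced to {End_Waiting_1}; when that timer event arrives, the rule fires.
rules = {"Rule2": {"on": {"Idle_Processor", "End_Waiting_1"},
                   "do": [lambda: print("group and submit to the service organization")]}}
interpret(["Idle_Processor", "End_Waiting_1"], rules)
```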

4.3 The mapping of activity instances' states

The batch activity type is a special kind of activity: when it executes, it is implemented in the form of a workflow sub-process instance. This sub-process is instantiated by the group manager and is handled as a new process instance awaiting interpretation and execution by the workflow engine. Since a batch sub-process may contain many activities, we must build a mapping between the state of an actual activity instance and that of the related virtual ones; this mapping function is also part of the batch service engine.

Taking Figure 2b as an example, assume that nwtk1 is the instance of the new inner batch workflow sub-type formed by grouping, and that the original virtual sub-process instances are wtk1, ..., wtkn (they are virtual because they no longer exist in the workflow system). Thus nwtk1 is composed of wtk1, ..., wtkn, i.e., nwtk1 = {wtk1, ..., wtkn}, and the mapping of states is given by formula (1):

    state(nwtk1.a) = s  ⇒  ∀ wtki ∈ nwtk1: state(wtki.a) = s        (1)

The formula says that if the state of any activity instance a of nwtk1 is s, then the state of the corresponding activity a of each wtki is also s. Through this mapping we maintain the consistency of the states of each activity instance inside and outside the BPA and keep the system transparent to customers, ensuring that the monitoring and querying of workflow instances work correctly.
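Formula (1) amounts to propagating a state change of the grouped instance to every virtual member instance. A minimal sketch, with hypothetical names, of how the batch service engine's mapping function might do this:

```python
def propagate_state(members, activity, new_state, virtual_states):
    """Apply formula (1): state(nwtk1.a) = s implies state(wtki.a) = s for every
    virtual member instance wtki grouped into the batch instance nwtk1."""
    for wtk in members:
        virtual_states[(wtk, activity)] = new_state

# Usage: activity "b" of the batch instance completes, so W1.b and W2.b both
# appear completed to anyone monitoring or querying the original cases.
states = {}
propagate_state(["W1", "W2"], activity="b", new_state="completed",
                virtual_states=states)
print(states)   # {('W1', 'b'): 'completed', ('W2', 'b'): 'completed'}
```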

5 Conclusion and further study

As our government is advocating an economical society that saves resources such as electricity and gas, every enterprise and every citizen is under an obligation to use new technology, or to adjust their way of work and life a little, to reach this grand goal brick by brick. Batch service is one method contributing to this goal. It has been used in many fields such as the chemical and steel industries, and in the business domain it also has a broad application background.

In practice, batch service commonly exists as one or several steps of a production or business process. As a powerful tool for modeling and executing business processes, the WfMS has gained extensive attention from industry and academia. However, traditional WfMSs do not address this problem. This paper first introduces the concept of batch processing and proposes a simple theoretical model. Through further analysis it then classifies batch service in workflow into eight X/Y/Z forms, in which X is the number of BPAs in a process, Y is the number of activities in a BPA, and Z is the maximum number of service organizations providing services in a BPA. The new problems and issues that arise in a WfMS once batch service is taken into consideration are also analyzed. By adding a new activity type to the workflow model and some new features to the workflow enactment service, batch processing can be supported easily. As traditional workflow aims at separating and abstracting business processes from the software systems of an enterprise to avoid hard-coding business process related aspects, the contribution of this paper is that it further separates the batch service control logic from the software function logic. In this paper, however, we have only implemented a WfMS that supports batch service of the K/1/1 pattern. Much work remains to be done, such as setting the optimal parameters and implementing an optimal scheduling algorithm for the K/M/N pattern.

Acknowledgement. This work was supported by the National 973 Plan under grant no. 2003CB317007 and the Youth Natural Science Foundation of the Education Department of Hunan Province under grant no. 04B022.

References

1. Bacot, J.B., Dshalalow, J.H.: A bulk input queuing system with batched gated services and multiple vacation policy. Mathematical and Computer Modelling 31(7-8) (2001) 873-886
2. Ben-Arieh, D., Sreenivasan, R.: Information analysis in a distributed dynamic group technology method. International Journal of Production Economics 60-61 (1999) 427-432
3. Geppert, A., Tombros, D., Dittrich, K.R.: Defining the semantics of reactive components in event-driven workflow execution with event histories. Information Systems 23(3-4) (1998) 235-252
4. Goh, A., Koh, Y.K., Domazet, D.S.: ECA rule-based support for workflows. Artificial Intelligence in Engineering 15(1) (2001) 37-46
5. Hans, W., van den Heuvel, W.J.: Cross-organizational workflow integration using contracts. Decision Support Systems 33 (2002) 247-265
6. Hollingsworth, D.: The Workflow Reference Model: 10 Years On. In: Workflow Handbook 2004. http://www.wfmc.org (2004)
7. Hu, J.M., Grefen, P.: Conceptual framework and architecture for service mediating workflow management. Information & Software Technology 45(13) (2003) 929-939
8. Hwang, S.Y., Tang, J.: Consulting past exceptions to facilitate workflow exception handling. Decision Support Systems 37 (2004) 49-69
9. Kradolfer, M.: A Workflow Metamodel Supporting Dynamic, Reuse-based Model Evolution. PhD thesis, Department of Information Technology, University of Zurich, Switzerland (2000). www.ifi.unizh.ch/ifiadmin/staff/rofrei/Dissertationen/Jahr_2000/thesis_kradolfer.pdf
10. Kosten, L.: Stochastic Theory of Service Systems. Pergamon Press, New York (1973)
11. Lee, C.-Y., Lei, L., Pinedo, M.: Current trends in deterministic scheduling. Annals of Operations Research 70 (1997) 1-41
12. Liu, J.X., Yao, Y.X., Tang, X.H.: Pre-dispatching of tasks in workflow: the concept, model and implementation. Chinese Journal of Electronics 13(2) (2004)
13. Liu, J.X., Zhang, S.S., Hu, J.M.: A case study of an inter-enterprise workflow-supported supply chain management system. Information and Management 42(3) (2005) 441-454
14. Papadaki, K.P., Powell, W.B.: Exploiting structure in adaptive dynamic programming algorithms for a stochastic batch service problem. European Journal of Operational Research 142(1) (2002) 108-127
15. Mohan, C.: Recent trends in workflow management products, standards and research. In: Proc. NATO Advanced Study Institute (ASI) on Workflow Management Systems and Interoperability, Istanbul, August 1997. http://www.almaden.ibm.com/u/mohan
16. Potts, C.N., Kovalyov, M.Y.: Scheduling with batching: a review. European Journal of Operational Research 120(2) (2000) 228-249
17. Stohr, E.A., Zhao, J.L.: Workflow automation: overview and research issues. Information Systems Frontiers 3(3) (2001) 281-296
18. Strusevich, V.A.: Group technology approach to the open shop scheduling problem with batch setup times. Operations Research Letters 26(4) (2000) 181-192
19. WfMC: Workflow Reference Model. Workflow Management Coalition, http://www.wfmc.org/standards/docs (1995)
20. WfMC: Workflow Terminology & Glossary. http://www.wfmc.org/standards/docs (1996)
21. Zhuge, H.: Component-based workflow systems development. Decision Support Systems 35(4) (2003) 517-536
22. Zhuge, H.: Workflow-based cognitive flow management for distributed team cooperation. Information and Management 40(5) (2003) 419-429
