Workflow Verification: a New Tower of Babel
Sergio Giro
Université Aix-Marseille III, LSIS, Avenue Escadrille Normandie-Niemen, 13397 Marseille Cedex 20, France
FaMAF - UNC - CONICET, Haya de la Torre S/N, Córdoba, Argentina
[email protected]
Abstract— In this paper, we show the relevance of workflow model checking and discuss the problems that arise when doing research on workflows. We then propose different facts that may cause these problems. Past research in the field is summarized, and the conclusion sketches some future lines of research based on the previous analysis. Index Terms— workflow, verification, state-of-the-art.
I. INTRODUCTION
Workflows have proven to be an interesting conceptualization of the orchestration of services. As a consequence, workflows are used in a great variety of commercial tools supporting Service Oriented Architectures. In this context, the formal verification of workflows is useful to improve the general architecture of information systems, since formal verification reveals drawbacks in the proposed architecture that are otherwise difficult to detect. Such drawbacks are revealed because the properties specified at first are likely to be inadequate or insufficient to achieve the intended goals. In addition, customers and developers may exhibit opposing interests (e.g., while one department in the customer enterprise may want to make a less risky investment by offering an acceptable service, another department may want to make a riskier investment and offer the best possible service). Thus, by proving properties under different assumptions, we can provide different options, allowing choices to be made with more concrete information. Moreover, experience shows that, while the formal specification is performed, properties that seemed very clear turn out to be ambiguous in cases not considered before, and that new properties must be added to satisfy the requirements. In conclusion, the acquisition of knowledge about the system benefits greatly from formal verification. Taking into account that changes in the specification once development has begun are likely to be inconvenient (mainly because a great part of the work must be modified), the benefits obtained during the specification of requirements may result in an overall improvement of the development of the system. In addition, a clear specification
allows the different actors in the production of the system (customers, providers, developers) to show that they performed the required work correctly, thus resulting in better control of the responsibilities involved. Therefore, proving properties of the system at the beginning of the development process may result in long-term benefits, although this verification delays the beginning of more “concrete” development tasks such as programming. Once we have a set of components committed to offering services under certain constraints, we can link them to achieve a desired goal (e.g., the production of an item in less than one minute). The different tasks performed toward the goal use the services provided by the components. Workflows are always composed of tasks, each of which enables other tasks with its production, until the production process ends. If a task T enables some tasks {Ti} when it finishes (e.g., because of the resources provided, or the verification of permissions), then each Ti is an outgoing task of T. Conversely, each Ti in the set of tasks needed to perform a task T is called an incoming task. Thinking in terms of workflows is helpful for many reasons:
• Workflows can be modeled to prove properties.
• The model can also be studied to improve efficiency (e.g., by detecting bottlenecks).
• The workflow model describes precisely the way in which the different components interact.
We think that the formal verification of workflows is not only a very useful topic, as explained above, but also one that is likely to become widespread, given the current efforts to isolate the logic of interaction of the components from the rest of the system (in the same way in which graphical interfaces and database management were isolated in separate applications [1]).
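The incoming/outgoing relation between tasks described above can be sketched as a directed graph. The following is a minimal illustration, not any particular workflow formalism; the task names and topology are invented:

```python
# Minimal sketch of a workflow as a task-enabling graph.
# Task names and the example topology are made up for illustration.

workflow = {
    "receive_order": ["check_stock", "check_credit"],
    "check_stock":   ["ship_item"],
    "check_credit":  ["ship_item"],
    "ship_item":     [],  # no outgoing tasks: the process ends here
}

def outgoing(task):
    """Tasks enabled when `task` finishes."""
    return workflow[task]

def incoming(task):
    """Tasks whose completion `task` depends on."""
    return [t for t, succs in workflow.items() if task in succs]

print(outgoing("receive_order"))  # ['check_stock', 'check_credit']
print(incoming("ship_item"))      # ['check_stock', 'check_credit']
```

On such a graph, bottleneck detection or reachability questions become ordinary graph problems.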
This is an ambitious aim, since it is equivalent to splitting the “macro-logic” of the system (which may rely on a special “meta-component”, such as the Enterprise Service Bus in a Service Oriented Architecture) from the underlying “micro-logics” of each component. But the macro-logic may be complex, and very dependent on the “micro-logics” involved. In addition, the issues of fault-tolerance, exceptions, efficiency, etc., are closely tied to the macro-logic, and each one of them can be considered in the properties proposed for the system.
II. WORKFLOW ISSUES
A. Introduction
When doing research on workflows, we must face problems in two different aspects: problems about the properties of the workflows themselves (“Does the workflow exhibit the correct behavior?”, “Does it do so efficiently?”) and problems associated with research in this topic (“What formalism for workflows should I choose to prove satisfiability of temporal constraints?”, “Should I define a new formalism?”).
B. Problems Associated with Research in Workflow Verification
Research in workflow verification is far from having reached a consensus in many aspects. There are no well-known, foundational and universally accepted formalisms for workflow verification, in the way that finite state automata or Turing machines are well-known, foundational and universally accepted formalisms in automata theory. The formalisms most likely to be used are π-calculus ([2], [3]) and Petri nets (a comparison between them can be found in [4]). Many researchers define their own formalism to prove a particular property (see [5], [6], [7]): in the C3DS project, many formalisms were used to deal with workflows (see [8], [9], [10]). In this sense, the preliminary study leading to YAWL [11] is very interesting, since YAWL was designed to be a synthesis of the existing workflow languages. Research about YAWL does not yet involve analysis-related issues: only one theorem has been proven (it concerns soundness and can be found in [11]), and one algorithm, presented in [12], checks when an OR-join is enabled (a very elementary result). This lack of consensus raises the following problems:
• There are many things to consider about a workflow (and the different publications often address only one of them):
– Temporal constraints: the tasks may be restricted to be performed within a given time, and not to take more time than a given constant (as an example, a storage task provided by a third party can have a temporal limit). The problems in [13] are related to these constraints.
– Fault tolerance: we can specify what happens if one of the components cannot perform its service properly, or if one of the elements needed is damaged.
– Efficiency: it is natural to ask whether the same work can be performed more efficiently. This is closely related to temporal constraints. These issues are analyzed in [13].
– Correctness: in general, workflows model reactive systems, and the usual verifications address the problem of determining whether the system will react to an external event and, if so, whether it reacts in a proper way (issues concerning liveness, fairness, deadlock-avoidance, etc.). These kinds of properties are studied in [10] using a process algebra, with the obvious advantage that process algebras have been used for this kind of verification for many years. In [6] we can find the complexity of some problems related to correctness.
– Change tolerance: once the workflow is active, it may require changes. Here, we face the problem of determining what happens to the elements that are being processed. This problem is addressed in [5].
• Slow growth of background knowledge in the field: in each publication a different formalism is proposed. It would be very helpful if we had tools as standard as graphs or automata to study mathematical problems about workflows. It is difficult to find a formalism including information about all the aspects needed (temporal constraints, capacity, behavior under unexpected circumstances). On the other hand, an excess of generality may result in undecidability or high complexity (see [6]) and, therefore, a new formalism must be proposed, with the consequent lack of support and background information.
• The publications are not always focused on computer science: the study of workflows is related not only to computer science, but also to management-related areas. Therefore, many publications are oriented toward these aspects, with little or no treatment of the workflow as a matter of study in computer science. The information is, then, dispersed among many publications, and it is difficult to know in advance whether some research is formal or useful, or what its justifications are.
• The expressiveness of the formalism may be undesirable: in a very expressive formalism it is hard to prove any property (e.g., it cannot be decided whether a Turing machine halts). On the other hand, a very elementary formalism may not suffice to cope with real-life workflows, or with a considerable part of the problems related to workflows, which are detailed in the next section.
C. Problems Associated with Workflows
The properties that can be studied are oriented toward different aspects: temporal issues, fault-tolerance, capacity restrictions, etc. After a study of the research to date, the problems most likely to appear are the following:
• Desired behavior. This is related to the structure given to components and to the communications between them, as well as to assessing whether the workflow reacts properly to external events.
• It must be verified that all the conditions imposed on the interaction between the components can hold at the same time, i.e., that the workflow can exist in the real world.
• It is possible to ask whether the conditions are too strict, or too permissive. When dealing with temporal restrictions, this kind of problem leads to resources that are ready to perform a task (and therefore wasting time, or energy,
etc.) when we can know in advance that the task they perform will not be invoked (by means of the temporal constraints).
• It is useful to know what happens when a damaged or faulty element (a message, an electronic component, etc.) enters the workflow. The same holds in the case of tasks not performed properly.
• Assessment of the correctness of the elements being processed when a running workflow is changed.
• The possibility of changing the normal behavior of the workflow in order to obtain a particular instance.
In the different approaches to workflows, one or more of these problems are studied.
III. DIFFERENT APPROACHES
A. The C3DS Project
The C3DS (Control and Coordination of Complex Distributed Services) project uses workflows to model the support of complex services provided by distributed components. Different technologies are used to cover the whole service life-cycle (specification, analysis, development and management): workflow analysis, Message-Oriented Middleware (MOM)/Agent technology, and an architecture-description-language-based development environment. Using these technologies, the project developed an advanced services execution environment, the Task Control and Coordination Service (TCCS), mixing workflow and MOM/Agent techniques with graphical tools based on Architecture Description Language (ADL) tools. The service provisioning platform and the development environment are described as:
• Usable (ease of use through graphical interfaces).
• Open (the platforms used are standard).
• Dependable (it provides a reliable and available service).
• Scalable (no centralised services are required in the underlying platforms).
1) Project Contributions: Some of the contributions of this large project are detailed below.
• Darwin: an architecture description language (for examples of its use, see [8]). It supports quite complex data structures. It also supports comprehensive constructions such as forall and when, which allow components to be specified in general and then specialized. It is based, mainly, on the declaration of component types, their instantiation, and the links between the different instances.
• Use of temporal logics [9]: using a first-order logic extended with temporal operators (eventually, in the past, etc.), properties such as dependability, safety, availability, reliability, etc. are expressed. Different approaches are also proposed to use these properties, as well as to integrate them into development environments. An example of these
properties is:
Dependability(S) ≡ ([σ] ∧ Faulty(σ)) ⇒ ∃σ′ ∈ Σ : [σ] < [σ′]
This property says that dependability holds if, whenever the system is in a state σ (denoted by [σ]) and σ is a faulty state, there exists a valid state σ′ (a state is valid if it belongs to the specification Σ) such that the system can reach σ′ from σ (denoted by [σ] < [σ′]).
• Systematic Synthesis of Transactional Middleware [14]: the middleware synthesis is based on the formal specification and on the generation of stub code.
• Process Algebra utilization [10]: every component is specified as a process, and in this way interesting properties such as deadlock freedom, liveness, etc. can be proven.
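The dependability property above can be checked by exhaustive search on a small finite transition system. The following sketch is illustrative only: the states, transitions and fault/valid labelling are invented, and plain reachability stands in for the [σ] < [σ′] relation:

```python
# Sketch: checking the dependability property on a toy transition system.
# States, transitions and the fault/valid labelling are hypothetical.

transitions = {"s0": {"s1"}, "s1": {"s2", "fault"}, "fault": {"s0"}, "s2": set()}
faulty = {"fault"}
valid = {"s0", "s1", "s2"}  # the specification Sigma

def reachable(start):
    """All states reachable from `start` by one or more transitions."""
    seen, frontier = set(), {start}
    while frontier:
        state = frontier.pop()
        for nxt in transitions.get(state, set()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.add(nxt)
    return seen

def dependable():
    """Every faulty state can reach some valid state."""
    return all(reachable(s) & valid for s in faulty)

print(dependable())  # True: from 'fault' the system can recover to s0
```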
B. Semantics of Reactive Components in Event-Driven Workflow Execution [7]
This paper is based on the BROKER/SERVICES Model (B/SM). This model is proposed for architectures composed of:
• Components: representing entities that take part in the workflow.
• Tasks: that can be performed by those components.
• Rules: determining the conditions and the way in which the components can perform the tasks.
A service is specified by a signature consisting of:
• The name of the service.
• A set of typed parameters.
• The possible replies and exceptions it may cause.
A broker is represented in terms of ECA (event-condition-action) rules, which specify what happens when an event occurs and a condition holds. An example of an ECA rule borrowed from the original publication is:
ON request (accept_case,mail)
IF (HIC = create_claim(mail)) AND HIC.amount > 300
DO request (control_blacklist, HIC.treatment)
   request (control_treatment, HIC.treatment, HIC.diagnosis)
This rule corresponds to a broker in an insurance company, and specifies that when a claim enters, if a record for the claim can be created and the amount is greater than 300, requests for the relevant controls must be performed. Different types of events are defined, along with different compositions of them. The types of events are:
• Broker interaction events:
– REQ(name,plist): request of service.
– CFM(name): confirmation of a request.
– RPL(name,plist): reply.
– EXC(name,ename): exception event type (ename: name of the exception).
• TET: temporal event.
The constructions to compose them are as follows:
• CON(ET1,ET2,sw): event conjunction (sw is a possible restriction requiring the events to occur in the same workflow, since this approach is multi-levelled). This event occurs when an event of type ET1 and an event of type ET2 happen, independently of the order of occurrence.
• DEX(ET1,ET2,sw): event sequence.
• CCR(ET1,ET2,sw): event concurrence (which does not mean, of course, that the events occur exactly at the same time, but that they are expected to comply with certain restrictions in order to be considered concurrent).
• NEG(ET1,(ET2,ET3,sw),sw): occurs when an event of type ET1 does not occur in an interval delimited by an event of type ET2 and an event of type ET3.
• REP(ET1,times,sw): occurs when ET1 occurs a predefined number of times.
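The insurance-claim ECA rule shown earlier can be mimicked in plain code. In this sketch the field names and the 300 threshold mirror the example rule; the request queue and the body of create_claim are invented stand-ins for the broker machinery:

```python
# Sketch of the insurance-claim ECA rule in plain Python.
# The threshold and field names follow the example rule above; the
# request queue and create_claim body are hypothetical stand-ins.

requests = []  # requests the rule emits, in place of the broker's DO part

def create_claim(mail):
    # Hypothetical: build a health-insurance claim record from a mail message.
    return {"amount": mail["amount"], "treatment": mail["treatment"],
            "diagnosis": mail["diagnosis"]}

def on_accept_case(mail):
    """ON request(accept_case, mail) IF claim created AND amount > 300 DO ..."""
    hic = create_claim(mail)
    if hic is not None and hic["amount"] > 300:
        requests.append(("control_blacklist", hic["treatment"]))
        requests.append(("control_treatment", hic["treatment"], hic["diagnosis"]))

on_accept_case({"amount": 450, "treatment": "x-ray", "diagnosis": "fracture"})
print(requests)  # two control requests are emitted, since 450 > 300
```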
Then, a semantics is given for the broker, in which it is considered as a “black box” expressed in terms of the events it generates.
C. Temporal Reasoning in Workflow Systems [13]
The problems addressed in this paper relate to workflows in which the temporal distances between tasks that must execute sequentially are bounded (i.e., the temporal distance between the executions of two tasks must be between m and n). In this approach, it is assumed that every case arising from the temporal constraints in the workflow can be isolated in order to study it separately. Hence, the techniques deal with workflows without conditions, where the tasks to perform are defined in advance. Each one of these cases is called a constraint network. The problems addressed, among others, are:
• Consistency: can all the tasks be performed according to the restrictions? When the granularity of all the temporal distances is the same in each case (e.g., there is no distance expressed in days and another in months for the same pair of tasks), the problem can be easily solved in time O(n³) by means of the all-pairs-shortest-path Floyd-Warshall algorithm. This algorithm is called the consistency algorithm when it is used to solve the problems that follow.
• Free scheduling: it is useful to find a schedule for the execution of the tasks in which the only restriction applies to the beginning of the activities, while the duration constraint remains the original one in the network. An algorithm of polynomial order is shown to find one, if it exists. This scheduling is not necessarily maximal: general schedules do not establish a moment for the beginning of each task, but an interval in which the task can begin. The schedules generated by this algorithm impose an interval of only one moment for the beginning of each task; an algorithm is given to extend these intervals and find the earliest maximal free schedule.
• If no free schedule exists, a polynomial-time algorithm is presented to find a schedule that complies with
the minimum duration of each task, as imposed by the network.
This paper makes heavy use of the results in [15].
D. A Formal Framework for Workflow Type and Instance Changes under Correctness Constraints [5]
One of the most interesting particularities of this approach is that the state of every task is described. It can be:
• Executing: the task has taken every necessary resource and has not generated new resources yet.
• Completed: the task has taken every necessary resource and generated new resources.
• Skipped: it was not necessary to execute the task. This state is possible since the control edges (representing resources) can be signalled as true or false. If every control edge is signalled as false, the task is not needed.
• Activated: the task has every necessary resource available, but has not begun to execute yet.
A special graph for workflows is defined, called a schema. This graph has control edges, which represent the situation in which a task generates a resource that another task needs to perform its operation; it also has cycle edges, to specify that a cycle in the control is desired (undesired cycles may lead to problems), and data edges, to specify that a task reads or writes some data element. In addition, there are conditions that restrict the possible transitions. Then, graph editing operations are defined (to add control edges or activities, to delete control edges or activities, etc.) and the following problems arise:
• Can a change be correctly propagated, without errors or inconsistencies, for an instance? (If so, the instance I is said to be compliant with the new schema.)
• Assuming instance I is compliant with S′, how can we smoothly migrate I to S′ such that its further execution can be based on S′? Which state/marking adaptations become necessary in this context?
Based on these problems, an axiom is stated for dynamic change correctness.
To establish this axiom, the history of the workflow is limited to the last execution of each cycle (the reduced history), in such a way that different executions of the workflow become equivalent. Basically, this axiom states that an instance I is compliant with a new schema S′ if and only if the reduced history can be replayed in S′. It also says that, assuming that I is compliant with S′, to obtain the new state of the activities it suffices to replay the reduced history in S′. Then, a theorem is stated (the proof is not in the paper) about necessary and sufficient conditions for every graph editing operation to ensure correctness if the change is applied. The conditions are very complex, probably because of the complex structure of the graph. Given that the replay of the reduced history may be a complex task, an algorithm is given to calculate the new markings, avoiding unnecessary operations. The need for biased instances is shown for executions that must be performed in a different way to that originally defined
by the workflow schema. Biased instances are defined as instances of a schema S + ∆I, where ∆I is the set of changes applied to S in order to obtain the biased instance, instead of the one that S would generate. This need also raises the problem of determining whether some propagation can be applied to a biased instance:
• May a change ∆T be propagated to a biased instance I, even if the current execution schema SI = S + ∆I for I differs from S?
• If change propagation is possible, how can it be efficiently and correctly accomplished? Which execution schema S′I (and marking MS′I) must result?
A change ∆T is defined axiomatically as capable of being propagated to a biased instance if these two conditions hold:
• The change corresponding to the change of the schema itself must be possible once the change necessary for the biased instance has been performed (e.g., it is not possible for the change of the schema to add a control edge to a task deleted as a consequence of the change performed in the workflow to produce the biased instance).
• The reduced history of the biased instance I can be reproduced in the schema in which the two editions were performed (the final schema), and the states of the tasks are determined by replaying the reduced history in the final schema.
Then, a sufficient condition is given to prove deadlock avoidance when a change of schema is propagated to a biased instance. As a last contribution, the paper presents a theorem stating that the changes corresponding to a biased instance and to a change of schema can be performed in either order.
E. Verification Problems in Conceptual Workflow Specifications [6]
In this approach, tasks may be defined in terms of other tasks, even recursively. The graph proposed to model workflows is quite simple, having nodes corresponding to decisions (which enable only one of the outgoing tasks) and synchronizers (which need all their incoming tasks completed before they can enable the outgoing tasks).
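The two node semantics just described (decisions enabling exactly one outgoing task, synchronizers waiting for all incoming tasks) can be sketched as follows; the node names here are hypothetical, not tied to any particular workflow:

```python
# Sketch of decision vs. synchronizer enablement, as described above.
# Node names and topology are illustrative only.

incoming_of = {"H": {"C", "G"}}   # synchronizer H needs both C and G

def decision_enables(chosen, alternatives):
    """A decision enables exactly one of its outgoing tasks."""
    assert chosen in alternatives
    return {chosen}

def synchronizer_enabled(node, completed):
    """A synchronizer fires only when all its incoming tasks have completed."""
    return incoming_of[node] <= completed

print(decision_enables("D", {"D", "E"}))      # {'D'}: E is not enabled
print(synchronizer_enabled("H", {"C"}))       # False: G is still pending
print(synchronizer_enabled("H", {"C", "G"}))  # True
```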
Figure 1 shows an example workflow from this paper. Here, the circles represent conditions: after C, either D or E (but not both) may execute. The triangles represent synchronizers: H can execute only after C and G. The condition between D and F is called terminating, because it is possible that no task gets enabled after it (the condition may choose termination, indicated with a bar). Task A is considered complete when every execution tree has completed; this is an implicit synchronization. Some problems related to workflows are presented, and their complexity is then analyzed:
• The initialization problem: does there exist a sequence of events leading to the execution of a given task? This problem is NP-complete. Here it is supposed that a condition may hold
Fig. 1. Workflow example from [6]
or not depending on another condition. Thus, the condition to execute a task is a conjunction of disjunctions, and asking for its satisfiability is a well-known NP-complete problem.
• The termination problem: can a workflow reach a terminal state? This problem is proven to be DSPACE(exp)-hard.
• A workflow is said to be safe if and only if, from every reachable state, it is possible to reach a terminal state. Determining whether a workflow is safe is a DSPACE(exp)-hard problem.
• It is useful to study how many elements may reach a task at the same time (particularly because of capacity constraints). A task in a workflow is said to be n-bounded if, for every reachable workflow state, no more than n elements are handled by the task at the same time. Determining whether a task is n-bounded is a DSPACE(poly)-complete problem.
• The problem of workflow equivalence is undecidable. In particular, it is not possible to tell whether a workflow specification is more generic than another workflow specification.
Then, workflows are restricted in order to decrease the complexity of the problems. The restrictions are:
• Only the implicit synchronisation (see above) is allowed.
• No cyclic structures (recursive task definitions) are allowed.
Taking into account these restrictions, the termination problem can be solved in polynomial time.
F. YAWL [11]
The preliminary work leading to YAWL included a review of the features of the available workflow languages. The aim in the design of YAWL was to make a language suitable for writing most workflows. The YAWL designers carefully distinguished between suitability and expressiveness, since expressiveness (as understood in language theory) is very easy to achieve, but a completely expressive language cannot ensure that workflows are expressed in such a way that the specifications are clear and precise.
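The NP-completeness argument for the initialization problem rests on the reduction above: a task's enabling condition is a conjunction of disjunctions over conditions, i.e., a CNF formula. A brute-force satisfiability check makes the exponential blow-up concrete; the clause set below is made up for illustration (positive integers stand for conditions, negative ones for their negations):

```python
# Brute-force satisfiability of a conjunction of disjunctions (CNF),
# sketching the hard core of the initialization problem.
# Clauses are illustrative; literals are signed condition indices.
from itertools import product

def satisfiable(clauses):
    """Try all 2^n assignments of the conditions appearing in the clauses."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        value = dict(zip(variables, bits))
        if all(any(value[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (c1 or c2) and (not c1 or c3) and (not c2 or not c3)
print(satisfiable([[1, 2], [-1, 3], [-2, -3]]))  # True, e.g. c1=T, c2=F, c3=T
print(satisfiable([[1], [-1]]))                   # False: c1 cannot be both
```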
In order to ensure a good trade-off between suitability and expressiveness, all the workflow patterns (workflow behaviors commonly found in practice) are considered, and the language
is proven to be sufficiently expressive to specify all the (different combinations of) patterns. The patterns include, but are not limited to, the following behaviors:
• Cancellation: a task or a set of tasks may be cancelled during its execution.
• Multiple instances: multiple instances of a process may be launched on request to run concurrently.
• Temporal issues: it is possible to include timers in the model.
The result is a very complete language, but one with a rather complex semantics. In fact, a problem with the semantics of YAWL arose (it is stated and solved in [12]) around the OR-join operator (an operator that enables the outgoing tasks when some undetermined number of incoming tasks have ended). The criterion for determining whether an OR-join is enabled (compared with the criteria for the other operators, and with workflow operators in general) is very complex, and an algorithm is given to compute it.
IV. CONCLUSION
The different and almost unrelated problems arising when dealing with workflows explain the plethora of definitions for workflows. The problem seems to be that no basic and specialized formalism for workflows has been defined (YAWL is specialized but not basic, in the sense that its semantics requires underlying constructions). Petri nets and π-calculus are obvious candidates, but they are not a foundational formalism for workflow verification on which more complex tools could be based (in the way that λ-calculus is the basis of functional programming). In addition, they are not specialized: in order to use Petri nets for workflow verification, they must be extended [1]. The standardization of extensions of Petri nets specialized for workflows is an interesting topic, since the literature shows that the notions of tokens and transitions can be successfully applied to specify workflows. In particular, the time extension (used for a similar problem in [16]) would be a useful contribution: it only appears in isolation in [13]. In addition, if more expressiveness is needed, Petri nets with inhibitor arcs [17] come in handy, since they are as expressive as Turing machines and, if they are primitive, many interesting properties can be proven about them. The primitiveness of a Petri net with inhibitor arcs holds, in particular, for nets in which the inhibiting places (i.e., the places disabling transitions when there are tokens inside them) are n-bounded.
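The firing rule for Petri nets with inhibitor arcs mentioned above can be sketched as follows. The net itself is a made-up two-place example, intended only to show how an inhibiting place disables a transition:

```python
# Sketch of transition firing in a Petri net with inhibitor arcs.
# A transition fires only if its input places hold enough tokens AND
# its inhibiting places are empty. The example net is illustrative.

def can_fire(marking, inputs, inhibitors):
    return (all(marking[p] >= n for p, n in inputs.items())
            and all(marking[p] == 0 for p in inhibitors))

def fire(marking, inputs, outputs, inhibitors):
    if not can_fire(marking, inputs, inhibitors):
        raise ValueError("transition not enabled")
    new = dict(marking)
    for p, n in inputs.items():
        new[p] -= n
    for p, n in outputs.items():
        new[p] = new.get(p, 0) + n
    return new

# t consumes from p1, produces in p2, and is inhibited by tokens in p2.
marking = {"p1": 1, "p2": 0}
print(can_fire(marking, {"p1": 1}, {"p2"}))          # True: p2 is empty
print(fire(marking, {"p1": 1}, {"p2": 1}, {"p2"}))   # {'p1': 0, 'p2': 1}
```

After firing, the token in p2 inhibits t, so it cannot fire again until p2 is emptied.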
Another advantage of the use of Petri nets is, obviously, the large amount of research done about them over the last thirty years. However, problems involving Petri nets, although decidable, are very complex: the algorithms are proven to run in exponential space. This is difficult to overcome, since any model sufficiently expressive to describe workflows can be considered a translation of Petri nets. The complexity of the problems for Petri nets is a symptom of the inherent complexity of dealing with concurrency and shared resources. Petri nets are
also shown to be unsuitable for expressing all workflow patterns [11]. On the other hand, YAWL is very suitable for writing specifications, but its complex semantics makes it hard to reason about the specifications (which explains why not many results have been proven for it). An interesting approach would be a formalism for workflow verification with a semantics relying on basic structures such as finite state automata or (traditional) Petri nets, in such a way that, if the specification of temporal constraints is needed, timed automata or timed Petri nets can be considered instead of their untimed counterparts. In a nutshell, extensions to the language to handle different aspects could be added easily by fixing standard extensions for the underlying formalisms.
REFERENCES
[1] W. M. P. v. d. Aalst and K. M. v. Hee, Workflow Management: Models, Methods, and Systems. MIT Press, 2002.
[2] R. Milner, J. Parrow, and D. Walker, “A calculus of mobile processes, I,” Information and Computation, vol. 100, no. 1, pp. 1–40, September 1992. [Online]. Available: theory.lcs.mit.edu/iandc/ic92.html
[3] ——, “A calculus of mobile processes, II,” Information and Computation, vol. 100, no. 1, pp. 41–77, September 1992. [Online]. Available: theory.lcs.mit.edu/iandc/ic92.html
[4] W. M. P. v. d. Aalst, “Pi calculus versus Petri nets: Let us eat ‘humble pie’ rather than further inflate the ‘pi hype’,” unpublished discussion paper, available at is.tm.tue.nl/research/patterns/documentation.htm.
[5] M. Reichert, S. Rinderle, and P. Dadam, “A formal framework for workflow type and instance changes under correctness constraints,” University of Ulm, Faculty of Computer Science, Tech. Rep., 2003, available at www.informatik.uni-ulm.de/dbis/01/dbis/downloads/UIB-2003-01.pdf.
[6] A. H. M. t. Hofstede, M. E. Orlowska, and J. Rajapakse, “Verification problems in conceptual workflow specifications,” Data and Knowledge Engineering, vol. 24, no. 3, pp. 239–256, 1998.
[7] D. Tombros, A.
Geppert, and K. R. Dittrich, “Semantics of reactive components in event-driven workflow execution,” in Proceedings of the 9th International Conference on Advanced Information Systems Engineering, 1997, pp. 409–422.
[8] J. Magee, J. Kramer, and D. Giannakopoulou, “Behaviour analysis of software architectures,” University of Newcastle, Tech. Rep., 1999, available at www.newcastle.research.ec.org/c3ds/trs/index.html.
[9] T. Saridakis and V. Issarny, “Developing dependable systems using software architecture,” University of Newcastle, Tech. Rep., 1999, available at www.newcastle.research.ec.org/c3ds/trs/index.html.
[10] C. Karamanolis, D. Giannakopoulou, J. Magee, and S. M. Wheater, “Formal verification of workflow schemas,” University of Newcastle, Tech. Rep., 2000, available at www.newcastle.research.ec.org/c3ds/trs/index.html.
[11] W. v. d. Aalst and A. H. M. t. Hofstede, “YAWL: Yet Another Workflow Language,” Information Systems, vol. 30, no. 4, pp. 245–275, 2005.
[12] M. T. Wynn, D. Edmond, W. v. d. Aalst, and A. t. Hofstede, “Achieving a general, formal and decidable approach to the OR-join in workflow using reset nets,” in Proceedings of the 26th International Conference on Application and Theory of Petri Nets and Other Models of Concurrency, 2005.
[13] C. Bettini, X. S. Wang, and S. Jajodia, “Temporal reasoning in workflow systems,” Distributed and Parallel Databases, vol. 11, no. 3, pp. 269–306, 2002.
[14] A. Zarras and V. Issarny, “A framework for systematic synthesis of transactional middleware,” University of Newcastle, Tech. Rep., 1999, available at www.newcastle.research.ec.org/c3ds/trs/index.html.
[15] R. Dechter, I. Meiri, and J. Pearl, “Temporal constraint networks,” Artificial Intelligence, vol. 49, pp. 61–96, 1991.
[16] T. Gu, P. Bahri, and G. Cai, “Timed Petri-net based formulation and an algorithm for the optimal scheduling of batch plants,” International Journal of Applied Mathematics and Computer Science, vol. 13, no. 4, pp. 527–536, 2003.
[17] N. Busi, “Analysis issues in petri nets with inhibitor arcs,” Theoretical Computer Science, vol. 275, no. 1-2, pp. 127–177, 2002.