A Petri Net based Approach for Ergonomic Task Analysis and Modeling with Emphasis on Adaptation to System Changes

Tom Kontogiannis
Technical University of Crete, Department of Production Engineering and Management
Chania, Crete GR 73100, Greece
Tel: +30 (0)821 37320, Fax: +30 (0)821 69410, Email: [email protected]

Abstract

Task analysis has been extensively used in industrial ergonomics in order to identify system demands and operator plans for achieving goals and coping with high workload. On the other hand, computer modeling has been used for simulating human-machine interactions under a range of conditions, such as changes in task allocation, team structure, operating procedures and system events. To integrate task analysis and computer modeling, a new technique is proposed that utilizes Coloured Petri Nets. The proposed technique has been developed on the basis of twelve requirements reflecting aspects of task representation, control and decision making, and usability. A Petri Net notation of tasks and a classification of Petri Net-based plans are first introduced. This article is mainly concerned with the development of routines for making task sequences and plans adaptable to system changes that could give rise to task interruptions, changes in goal priorities, changes in task allocation, high workload and human errors. An evaluation follows on the basis of the specified requirements for task analysis and task modeling.

Relevance to industry

Task analysis and task modeling are important for matching system demands to operator capabilities. New computer modeling approaches, including Petri Nets, can simulate human-machine interactions under a range of conditions, such as technology innovations, changes in team structure, different allocation of tasks, and high workload.


1. Introduction

Modern work systems require reliable human interventions, efficient team cooperation and adaptation to changes caused by technological innovations or unforeseen events. To achieve these work requirements, a range of task analysis techniques have been developed in the domain of industrial ergonomics (Kirwan and Ainsworth, 1992; Luczak, 1997; Schraagen et al., 2000). Task analysis involves the study of activities and communications undertaken by operators and their teams in order to achieve a system goal.

The task analysis process usually involves three phases: (i) collection of data about human interventions and system demands, (ii) representation of those data in a comprehensible format or graph, and (iii) comparison between system demands and operator capabilities. The primary objective of task analysis is to ensure compatibility between system demands and operator capabilities, and if necessary, to alter those demands so that the task is adapted to the person. Kirwan and Ainsworth (1992) summarized several techniques developed for each phase of the task analysis process. Verbal protocols, questionnaires, and activity sampling are examples of techniques for task data collection while flow diagrams, timelines and hierarchical task analysis are helpful for describing operators' activities and plans.

On the other hand, computer modeling and simulation techniques (e.g., Micro-SAINT) have been developed to simulate the task and evaluate human-machine interactions under a range of conditions. Here different task configurations, team structures, operating procedures and monitoring strategies can be evaluated. Nowadays, advances in information technology and computer modeling would allow the development of task analysis techniques that achieve both phases of task description and task simulation within the same framework. This is a challenging research issue because, on several occasions, the requirements of task analysis and simulation may be conflicting. Task analysis, for instance, requires descriptions that are easily understood by different specialists (i.e., designers, users and computer experts) while task simulation requires a formal language of specification. In task analysis, we can use abstract descriptions of human activities in cases where we want to allow for some flexibility in the way that tasks are performed, or where we do not know at a certain stage the optimal sequence of activities. This kind of 'temporal abstraction' (Killich et al., 1999) is difficult to find in task simulation and computer modeling since these require precise specification of the flow of activities. There are also different perspectives taken by these two approaches: task analysis focuses on the flexibility and reliability demonstrated by human operators while task simulation focuses on precise specification and verification of human activities. In this sense, the purpose of this article is to present a new approach for integrating task analysis and computer modeling of performance within a single framework.

Existing approaches to the integration of task analysis and simulation, such as Micro-SAINT and WinCrew (Laugherty and Corker, 1997) and HOS (Glenn et al., 1992), have provided useful insights.

The proposed approach is based on a Petri Net representation of human activities, tools, and organizational roles. Petri Nets are cast both in a graphical form and a mathematical formalism. Although most of the modeling work can be done with the Petri Net graph, the mathematical foundation provides the basis for using a variety of formal analysis techniques that can be built into software packages to examine the structural and dynamic properties of Petri Nets. It is possible to examine, for instance, whether the task network contains deadlocks, never-ending tasks, 'dangling' tasks that do not contribute anything, and tasks that are activated unintentionally resulting in lack of synchronization. These properties of liveness, boundedness and fairness can only be investigated with formal analysis techniques, such as reachability graphs, place invariants, and reduction rules. There have been very few applications of Petri Nets to ergonomic task analysis and modeling.

Oberquelle (1987) was among the first researchers who used a general net approach (i.e., Roles, Functions and Action, RFA nets) for task analysis but did not specify a formal notation for the RFA nets.

The work of Coovert and his colleagues (Coovert and Craiger, 1997; Coovert and Dorsey, 1994; Coovert and McNelis, 1992) is notable for applying traditional Petri Nets to the analysis and training of tasks. In the context of human error and accidents, traditional Petri Nets have been used in the modeling of accident sequences (e.g., Love and Johnson, 1994; Johnston, 1998; Kontogiannis et al., 2000). More recent studies have focused on Coloured Petri Nets that model the flow of coloured tokens or 'data structures' throughout the task network instead of the flow of single values.

The work of Levis and his colleagues (Shin and Levis, 1999; Weingaertner and Levis, 1989; Wagenhals and Levis, 1998) is an example of Petri Net models of command and decision making in military tasks. On the other hand, there has been a growing interest in workflow management systems that utilize Petri Nets (e.g., Aalst, 1998a; Salimifard and Wright, 2001a).

It is conceivable that similar modeling concepts and techniques can also be developed for ergonomic task analysis and modeling, an effort undertaken in the current work. Coloured Petri Nets (Jensen, 1997a, 1997b) have been chosen as a candidate for task analysis and modeling because they provide facilities for hierarchical and timed descriptions, communication of 'data structures' (i.e., coloured tokens), simulation of tasks, and formal analysis in terms of reachability or occurrence graphs. Task analysis can be carried out by decomposing tasks into hierarchies of operations and plans in a manner similar to hierarchical task analysis (Shepherd, 1995). For this reason, a classification of plans has been developed for Petri Net-based descriptions of human plans and task sequences.

Structural and dynamic properties of the task network are investigated with the use of occurrence graphs that are automatically constructed by the Design/CPN software package. The simulation facility of Design/CPN is used for collecting performance data, such as time to perform tasks, idle time for operators and machines, product throughput, and operator workload. In addition, task simulation allows analysts to explore a variety of questions related to aspects of plan implementation. Some of those questions include the following: Are there enough resources (human operators and tools) to achieve a plan? How long will it take to achieve a plan? Are there any periods where some operators are overloaded or waiting idle? What deadlocks should be avoided to ensure that the goal is achieved? What activities should be done in parallel to speed up a task without creating deadlocks? These questions have guided the current work in modeling task activities with Petri Nets.

This article is organized as follows. The second section presents some requirements for task analysis and task modeling that have been used to develop and evaluate the proposed Petri Net-based technique. The third section contains a short introduction to traditional Petri Nets while the fourth section presents a Petri Net-based description of a taxonomy of plans. The fifth section raises a number of issues regarding adaptation of task modeling to system changes. The sixth section presents a generic description of tasks in terms of Coloured Petri Nets and introduces several programming routines for exploring aspects of operator workload and for making task networks adaptive to system changes and human errors. Finally, a case study is presented to illustrate the types of workload data that can be collected, and an evaluation of the proposed technique is made in the concluding section.

2. Requirements for task analysis and modeling

Previous studies have specified several requirements for task analysis (Luczak, 1997; Stanton and Young, 1999). In an effort to integrate task analysis and task modeling, however, the author has adopted and supplemented the eight requirements developed by Killich et al. (1999). The requirements have taken into account the cooperative nature of modern work and have been applied by Killich et al. (1999) in a comparison of event-based, state-based, and Petri Net-based techniques for task analysis and modeling. Overall, twelve requirements are presented below to assist in the development and evaluation of the proposed approach. The requirements are grouped into three categories.

The first category refers to representational aspects of task analysis and modeling, such as facilities for representing human roles, material and communication resources, work objects and states of tasks. A general requirement and three specific requirements are proposed as follows:

1) Integrated view of control, information and object flows. The technique should present an integrated view of activity-related control, transmission and accessibility of information shared by team members in different activities, and flow of physical work objects (e.g., products, equipment and documents).

2) Representation of tools. Physical tools for production work and information tools should be represented to tackle aspects of productivity and work communication.

3) Representation of organizational roles and task allocation. Cooperative work requires a clear assignment of operators to organizational roles and a strategy for allocating tasks to reduce high workload and human errors. Killich et al. (1999) proposed a stricter requirement whereby activities should be grouped to create roles as typical cooperation schemes of individuals. This may be appropriate for clerical tasks and overall descriptions of departmental work but we may possibly relax this requirement for jobs in the manufacturing and process control industries.

4) Representation of both 'states' and 'events'. Most techniques (e.g., Micro SAINT, flow charts, and hierarchical task analysis) provide event-based descriptions of tasks whereas the 'states' between successive tasks are only implicitly mentioned. Event-based descriptions, in terms of activities and transitions, provide a goal-directed view of human behaviour. On the other hand, explicit descriptions of 'states' between tasks allow for data-driven aspects of behaviour whereby the environment intervenes in the execution of tasks.

The second category refers to aspects of control, command and decision making, such as the control of the flow of activities, the plan or sequence of activities, the facilities for adapting to changes due to high workload and errors, and the decision making and coordination aspects of cooperative work. Four requirements are proposed below to account for these aspects of task analysis and modeling:

5) Classification of plans. To facilitate learning the Petri Net notation and make task descriptions easier, a classification of plans is necessary in terms of both verbal and Petri Net descriptions. The classification is also useful in understanding different control sequences and task demands (e.g., time sharing of activities). This is a new requirement that is implemented in section 4.

6) Temporal abstraction of the flow of activities. This is one of the most important requirements examined in the Killich et al. study, which provided the main ground for a new task analysis technique (Killich et al., 1999). For cooperative and knowledge-based tasks, it is difficult to be certain of the most efficient temporal sequence of activities. In addition, operators may leave it to the last minute and make a decision about their plan 'on the fly'. To allow for the analyst's uncertainty and the operator's flexibility, it is necessary to be able to specify activities in an abstract manner. Killich et al. (1999), for instance, used 'composite activities' specifying that two activities could be done in sequence but not precisely which was done first. Hence, an approach of probabilistic sequence modeling as well as sequence abstraction is necessary.

7) Adaptation to system changes. A related and new requirement is the adaptation to changes due to different allocation of tasks imposed by technology innovations, high workload and human errors. Whilst 'temporal abstraction' is easier to achieve with task analysis methods, 'online adaptation' can only be achieved with task simulation. For instance, the allocation of tasks specified at the beginning of the analysis may result in high workload due to unforeseen events or errors that affect a member of the team. Adapting task allocation to operator workload is a highly desired facility for task modeling. Alternatively, switching from parallel to serial processing of activities could reduce workload. Facilities for adapting the initial sequence of activities to work changes are very important.

8) Cooperation and decision making. For distributed work systems it is essential that team members cooperate efficiently. Hence, the technique should represent clearly the communication channels and cooperation strategies of the team of operators. For complex tasks requiring knowledge-based processing, the technique should represent viable decision strategies under different conditions.

The third category refers to utility-driven requirements that support users in coping with complexity, such as decomposing complex tasks into hierarchies of activities, understanding and using the technique, and software support. Two requirements specified by Killich et al. (1999) are presented below, supplemented by the requirements of task simulation and formal analysis.

9) Hierarchical modeling. The technique should support hierarchical modeling of human activities to make descriptions easier to understand and facilitate the identification of common activities between hierarchies. Other desirable facilities include 'task templates' for using many instances of the same task concurrently.

10) Transparency and usability. The technique should be easy to learn and use by the different specialists (e.g., designers, users and computer experts) that are involved in the task analysis process.

11) Task simulation. For task modeling, it is necessary to be able to simulate the task network in order to inspect its dynamics and collect performance data, such as task duration, idle time, workload, and product throughput.

12) Formal analysis. This is an optional requirement but it is very important for verifying structural and dynamic properties of large networks of tasks. Task simulation may achieve some verification aspects of task networks. However, this becomes increasingly difficult as networks grow in size and complexity. This was one of the main reasons for using Petri Nets, which have a sound foundation in mathematics and formal analysis techniques.

3. An overview of Petri Nets

Petri Nets are a formal and graphical tool appropriate for modeling systems with concurrency. Since their initial conception by C.A. Petri (1962), there have been huge developments that resulted in hierarchical and timed descriptions, coloured tokens, and stochastic models. Petri Nets have been used in modeling many industrial systems with diverse characteristics, such as distributed, asynchronous, concurrent and stochastic systems (e.g., Valavanis, 1990; Ramaswamy et al., 1997). The ability to construct models with these properties makes Petri Nets an attractive tool for modeling operator behaviour.

Petri Nets have both graphical and mathematical dimensions. The graphical aspect makes them a useful tool for representing work systems in a visual manner. The mathematical notation of Petri Nets allows for rigorous formal analysis techniques that verify several structural and dynamic properties. Formally, the structure of a Petri Net is a bipartite directed graph G = [P, T, A], where P = {p1, p2, p3, ..., pn} is a set of places, T = {t1, t2, t3, ..., tn} is a finite set of transitions, and A = {P x T} U {T x P} is a set of directed arcs. The set of input places of a transition (t) is given by I(t) = {p | (p,t) ∈ A} and the set of output places is given by O(t) = {p | (t,p) ∈ A}. The mathematics of Petri Nets are not discussed here (see Reutenauer, 1990).

A High-Level Petri Net graph comprises the following elements:

- A net graph. It consists of nodes (i.e., places and transitions) and arcs connecting places to transitions.

- Places. They are passive elements that can describe beginning and ending 'states' of an activity, preconditions, and buffers for holding tools and resources. Places are represented as circles or ovals.

- Tokens. Places assume several states characterized by a collection of 'data items' called tokens. Each place can hold several tokens of the same data type, which are usually depicted as small solid circles.

- Arc annotations. Arcs are inscribed with expressions that may comprise constants, variables and functions. The expressions are evaluated by substituting values for the variables. When this occurs, it results in a movement of tokens from the input to the output places. An inscription on a transition is called a guard and evaluates to a Boolean value.

- Transitions. They are active elements that can be used for activities transforming input places to output places or transporting objects from one place to another. Many software packages allow us to write code inside the transitions so that we can control the flow of tokens and the evaluation of arc annotations.

- Declarations. They are statements regarding the data types held by places, the types of variables, and the specification of several functions.

A function, called a place-marking, assigns a set of tokens of the same type to each place. The state of all place-markings determines the overall state of the task at a given time. Petri Net graphs are executable, allowing the flow of tokens around the net to be visualized; this can illustrate the flow of control and flow of data within the same model. As a graphical tool, a Petri Net is similar to a chart or flow diagram. But Petri Nets go beyond flow diagrams in that they incorporate tokens that are used to simulate dynamic and concurrent activities. Tokens reside in places and move throughout the network as transitions fire. The firing of transitions is controlled by the code segments and the guards.

It is useful to make a distinction between 'enabling' and 'firing'. A transition is enabled when all input places contain at least the number of tokens required by each input arc. Two or more transitions are enabled concurrently as long as there is a sufficient number of tokens in all input places. A transition is fired when all preconditions are satisfied (i.e., it is enabled) and a trigger occurs. There are many sorts of triggers, including: an instruction from the supervisor, an electronic message, an environmental cue (e.g., high temperature), a time cue (e.g., a delay caused by previous transitions) or a question awaiting a response from the analyst who uses the software for the task network. Triggers are important elements of Petri Nets because they allow for data-driven control of tasks. If no triggers exist then enabling coincides with firing. On the firing of a transition, its input places lose as many tokens as specified on the input arcs while the output places gain as many tokens as specified on the output arcs. It is possible that several enabled transitions fire concurrently in one step. A Petri Net is thus a graphical modeling formalism especially suited for discrete-event dynamic systems with concurrent or parallel events and activities.
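As a reading aid for these definitions, the sketch below encodes a minimal place/transition net in Standard ML, the language used later with Design/CPN. The representation and function names are assumptions made for illustration; they do not reproduce the notation of the proposed technique.

    (* A minimal sketch of a place/transition net in Standard ML. A marking
       maps place identifiers to token counts; a transition lists its input
       and output arcs with their weights. *)
    type marking    = (int * int) list                 (* (place, tokens) *)
    type transition = {input : (int * int) list,       (* (place, arc weight) *)
                       output : (int * int) list}

    fun tokens (m : marking) p =
      case List.find (fn (q, _) => q = p) m of
          SOME (_, n) => n
        | NONE => 0

    fun update (m : marking) (p, d) =
      (p, tokens m p + d) :: List.filter (fn (q, _) => q <> p) m

    (* Enabling: every input place holds at least the tokens its arc requires. *)
    fun enabled (m : marking) ({input, ...} : transition) =
      List.all (fn (p, w) => tokens m p >= w) input

    (* Firing: input places lose, and output places gain, the tokens on the arcs. *)
    fun fire (m : marking) (t as {input, output} : transition) =
      if enabled m t then
        SOME (List.foldl (fn (arc, acc) => update acc arc)
                         (List.foldl (fn ((p, w), acc) => update acc (p, ~w)) m input)
                         output)
      else NONE

    (* Example: a single token moves from place 1 to place 2. *)
    val demo = fire [(1, 1)] {input = [(1, 1)], output = [(2, 1)]}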

4. A Petri Net based classification of operator plans

A classification of plans in terms of a Petri Net notation is necessary to facilitate understanding of control, decision making and task demands imposed on operators. It is also likely that task analysis is made more transparent and user-friendly to analysts with experience in traditional methods of task analysis. An effort has been made so that the classification of plans satisfies several aspects of the requirement of 'temporal abstraction' (Killich et al., 1999). The classification of plans specified in two versions of hierarchical task analysis (Shepherd, 1993; Kontogiannis and Shepherd, 2000) has provided a useful basis for further developments. The work of Aalst et al. (2000) on patterns of workflow management systems has also been taken into account since these workflow patterns were based on Petri Nets.

Four main building blocks have been used to compose appropriate plans for task analysis (Figure 1).

The AND-split block is used for initiating multiple subtasks that can be executed concurrently or in any order. The AND-join block is used for synchronizing multiple tasks converging into a single point. In order for the AND-join to fire, all of the preceding tasks should be completed. The OR-split block is an exclusive choice between two or more tasks. The rules for the choice are usually described in the guards or the code segments of the transitions. In the absence of any rules, the OR-split represents a conflict situation where only one transition will fire at random. Finally, the OR-join block is used for merging two or more tasks without synchronization. In other words, the merge fires when one of the incoming transitions is triggered.

--------------------------------------
Insert Figure 1 here
--------------------------------------

It is very important that blocks are combined in ways that do not result in deadlocks or lack of synchronization. An AND-split followed by an AND-join describes concurrent processing of tasks while an OR-split followed by an OR-join describes a choice between tasks. However, an OR-split followed by an AND-join results in a deadlock while an AND-split followed by an OR-join results in lack of synchronization. Although we can easily observe these composition rules in small task networks, an increase in the size of the network makes verification very difficult. Formal analysis techniques, such as graph reduction rules, have been specified for avoiding such problems in large networks (Sadiq and Orlowska, 2000).

The proposed classification scheme entails fourteen types of plans, most of which can be grouped into a few super-ordinate categories.

The first category includes four sequential plans whereby all tasks should be carried out in sequence only. The four sequential plans in Figure 2 are as follows:

1) Fixed sequences, where two or more tasks are carried out in a specified order.

2) Prioritized sequences, where two tasks are carried out, giving priority to one of them. In Figure 2, task B is executed once a token is received from the secondary place after the firing of task A.

3) Unordered sequences, where two tasks are carried out in any order, but not concurrently. To prevent concurrency, a secondary place is initially marked so that the task that is enabled first takes the token, fires and returns it to the secondary place for the second task.

4) Interleaved sequences, where tasks are done in sequence, but overlap for a period of time. Figure 2 shows that task A can be broken into two parts (A1 and A2) and task B starts after part A1 is executed and a certain delay has elapsed.

The second category includes four discretionary plans whereby the sequence of activities is at the discretion of the operator. This category accounts for situations where the operator is allowed some flexibility, or is allowed to make 'on-the-fly' decisions about the sequence of tasks. It also takes into account situations where the analyst is uncertain about the precise flow of activities. In this sense, discretionary plans satisfy some aspects of the requirement for 'temporal abstraction'. The four discretionary plans are:

5) Discretionary inclusive plans, where two tasks should be performed in any order or concurrently. The AND-join (Figure 2) ensures that both tasks are performed regardless of which task is executed first.

6) Discretionary exclusive plans, where only one of two tasks is performed at the discretion of the operator. Tasks A and B (Figure 2) are competing for the common token stored in the secondary place.

7) Optional plans, where the operator has the option of doing one or both tasks in any order. The guard in Transition T (Figure 2) is required to prevent lack of synchronization since transition T may fire again for the second task, thus unintentionally activating tasks that follow in the net.

8) 'N out of M join' plans, where the operator has the option to perform only n out of m tasks. The 'N out of M join' has been specified by Aalst et al. (2000) but a simpler representation may possibly be found when code segments are used.

--------------------------------------
Insert Figure 2 and Figure 3 here
--------------------------------------

The third category includes the following three choice plans whereby an exclusive choice is made between tasks:

9) Explicit choices, where a selection is made between tasks according to the conditions specified in the guards and the code segments of the transitions.

10) Implicit or deferred choices, where a selection is made between two tasks but the actual moment of choice is deferred until later. The final choice will be determined by the moment of activation of triggers A and B (Figure 3).

11) Probabilistic choices, where a selection between tasks is made according to probabilities. Probabilistic choices allow for non-deterministic descriptions of task sequences.

The distinction between explicit and deferred choices has been proposed and explored in several studies (Aalst, 1998a; 1998b).

Deferred choices are not possible in 'event-based' descriptions, such as flow or block diagrams. Explicit choices correspond to goal-driven views of human behaviour whilst deferred choices correspond to data-driven behaviour. Triggers are external events signaling the execution of transitions that have already satisfied all preconditions. In the context of task analysis, Dix et al. (1998) have proposed a classification scheme of triggers that includes: time tags, messages, memory cues, environmental cues, and automatic triggers.

The remaining three plans refer to parallel, conditional and iterative processing of tasks and are presented below:

12) Parallel plans, where two or more tasks are performed concurrently. In the configuration of Figure 3, the two tasks start concurrently (see the AND-split) but may terminate at different times. The AND-join block simply ensures that both tasks have been executed before another task in the net is started.

13) Contingent plans, where two tasks are executed in a specified sequence but the performance of the second task is dependent upon the presence of a cue. This is a variant of the sequential type of plan but requires some synchronization (AND-join).

14) Remedial or iterative plans, where performance of a task is repeated until a condition is satisfied (see condition x>=10 in Figure 3).

It is conceivable that more subtle specifications can be made for certain plans whenever the analysts believe that a situation calls for them. For instance, Raposo et al. (2000) specified different coordination mechanisms for parallel processing, such as: tasks A and B finish together but start at different times, tasks A and B start and finish together, task B starts after task A and finishes before task A, and so on. A finer specification may, however, increase the complexity of task representation in the graph.

The classification scheme is useful in helping analysts to transfer from manual forms of task analysis to Petri Net-based descriptions. Almost half of the proposed plans have been specified so that they satisfy many aspects of the requirement of 'temporal abstraction'. Specifically, the four discretionary plans, the unordered sequence, the deferred choice and the probabilistic choice allow the analyst to specify the flow of tasks in a more abstract fashion. In general, Petri Nets offer a formal notation for precise descriptions of task sequences: the more abstract the task description (i.e., the fewer the constraints on the task sequence), the more difficult it becomes to represent it in a Petri Net notation.
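As an illustration of how a probabilistic choice (plan 11) might be realised in the code segment of a transition, the following Standard ML sketch selects one of two tasks according to a given probability. The random draw is supplied by the caller; this is an assumed paraphrase rather than the arc inscriptions and guards actually used in Design/CPN.

    (* Sketch of a probabilistic choice between two tasks. The caller supplies
       a random draw in [0.0, 1.0) from any source. *)
    datatype choice = TaskA | TaskB

    fun probabilistic_choice (pA : real) (draw : real) : choice =
      if draw < pA then TaskA else TaskB

    (* Example: with pA = 0.8, a draw of 0.63 selects task A. *)
    val selected = probabilistic_choice 0.8 0.63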

5. Adapting task modeling to system changes

The classification of plans has been aimed at satisfying some of the requirements of task analysis. The other dimension, task modeling and simulation, requires facilities for exploring different configurations of tasks, operators and tools. Such exploration is necessary to examine how the task network adapts to system changes due to technology innovations, high workload, human errors, and external events. Simulation facilities enable analysts to identify problems and explore how to improve the task network by making changes to the structure and functioning of the net. On the other hand, adaptive task modeling requires that analysts have specified in advance alternative routes and functions for coping with potential problems. Adaptive modeling entails 'online' adaptation to problems and system demands. Aalst (1998a) distinguished between expected and unexpected changes. In cases of expected changes or problems, the analyst may specify in advance alternative flows of tasks (i.e., choice plans) so that different routes are taken for different problems. In cases of unexpected changes, the analyst should specify a task network that adapts itself to those changes. Incorporation of many choice plans or alternative routes may clutter the network but, on the other hand, adaptation to unexpected changes may increase significantly the requirements for software modeling and code segments. The proposed approach examines three types of online adaptation:

1) Adaptation of resources. The technique should allow adaptation of material and human resources in order to respond to problems and required changes.

2) Adaptation of goals and tasks. The technique should be able to handle changes in the priorities of goals, sequences of tasks and dependencies between tasks. For instance, a change in the situation may imply that a new goal should be given priority; hence, all goals in progress should be 'paused' and 'resumed' later. In other cases, a parallel sequence of tasks should be changed into a serial one in order to respond to a system change or to reduce workload.

3) Adaptation of performance to workload and errors. High workload and errors are likely to arise from the early specification of the task network. The technique should specify different types of errors and their likely consequences on human performance. A similar requirement exists for high workload that could slow down performance or affect accuracy.

Coloured Petri Nets offer a good basis for creating adaptive task models. The Design/CPN software package has been chosen as a candidate platform for generating adaptive task models. Some types of adaptation could be achieved by generating suitable 'data structures' or coloured tokens while more advanced adaptation may require further developments in software. The availability of the Standard ML language was another reason for choosing Design/CPN.

6. A proposed technique for ergonomic task analysis and modeling

This section presents a generic representation of task models that is appropriate for a wide range of manufacturing and process control tasks. The first part specifies appropriate structures for goals and tasks, illustrated in the context of a job assignment process, while the second part presents an overview of the generic task model. The third part examines some planning aspects of the job, such as selection of goals and execution of suspended tasks. Finally, the fourth part examines some task sequences and considers several adaptations of them to system demands.

6.1 Definition of goals and tasks based on Coloured Petri Nets

Coloured Petri Nets constitute an extension of traditional nets in that they use coloured tokens which accommodate 'data structures' instead of single values. Colours are analogous to 'data types' in a programming language, specifying the characteristics and attributes of entities in the model. For instance, a token of colour 'record' can have several 'fields' that hold the values of the token's attributes. The task network in Figure 4 shows an example of a model of the job assignment process of operators and tools to several goals. Incoming goal tokens are stored in the place Goals as records with fields {Goal ID, Goal Priority, Goal Status}. Four tool tokens are stored in place Tools and two operator tokens are in place Staff, represented as a tuple or pair of attributes specifying (Name, Efficiency).

--------------------------------------
Insert Figure 4 here
--------------------------------------

The code segment inside transition Assign specifies how staff and tools are assigned to different goals. When this transition fires, one token is removed from each input place and a new token is inserted in the place Task_in. The task token is a pair of records (p,s) referring to the tasks before and after any places of colour Task. This notation makes more sense when monitoring task changes from transition Task_A onwards. The first record (p) contains information about the previous task, such as {Goal ID, Task ID, Man, State of Task}. The second record (s) contains information about the following task, such as {Goal ID, Task ID, Man, Go}. The field Go takes a Boolean value (true/false) specifying whether the following task should be executed or skipped.

It is conceivable that another specification of task tokens could incorporate information about the efficiency of the tools as well as the efficiency of task execution. In that case, the code segments in the transitions should calculate the attribute 'task efficiency' on the basis of the efficiencies of the staff and tools. Information about task efficiency could be carried along the network by extending the task token to include a special field for this attribute. It is at the discretion of the analyst to specify the data structures contained in the task tokens in the way they think best.
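To give a flavour of what such token structures look like as data types, the following Standard ML sketch mirrors the goal records, the staff tuples and the two-record task token just described. The field names and example values are assumptions based on the text; the actual Design/CPN colour declarations are given in Appendix I.

    (* Illustrative Standard ML counterparts of the coloured tokens described above. *)
    datatype status = Complete | Incomplete

    type goal  = {gid : int, pri : int, status : status}   (* Goal ID, Goal Priority, Goal Status *)
    type staff = string * real                              (* (Name, Efficiency) *)

    (* A task token is a pair of records (p, s): information about the previous
       task and about the following task; go says whether the next task is
       executed or skipped. *)
    type prev_task  = {gid : int, tid : int, man : string, state : string}
    type next_task  = {gid : int, tid : int, man : string, go : bool}
    type task_token = prev_task * next_task

    (* Example tokens for the job assignment model of Figure 4. *)
    val goal_1 : goal  = {gid = 1, pri = 3, status = Incomplete}
    val tom    : staff = ("Tom", 0.9)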

Task tokens and goal tokens provide 'local' control of the flow of activities and staff changes, that is, changes mainly affecting tasks immediately following the current one. To introduce 'global' control, there is a need for 'buffer' places which keep token information and make it available to places directly connected to them. In general, 'global' control is only available to places directly connected to 'buffer' places, but this may cause problems in large networks since too many arcs could clutter the network. An alternative would be to specify global reference variables that store information and make it available to all places and transitions.

Another reason for having global reference variables is that the computer system can then have a better view of the state of task attributes. The computer system keeps track only of information that is stored or exchanged in transitions that are active at a certain moment. Information that is not carried on active transitions or 'buffer' places is not visible to the computer system. In this sense, a global reference variable of type 'Array' would store a lot of information accessible globally. In addition, it would be possible to specify supervisor roles for human operators so that they maintain the 'big picture' of the state of all tasks and information.

For these reasons, it is useful to declare a global reference array (Notepad) that would hold the following task information: (1) identification of the goal (gid), (2) identification of the task (tid), (3) priority of the goal (pri_g), (4) priority of the task (pri_t), (5) state of the task (state), (6) required accuracy for the task (exact), (7) achieved accuracy (exact_fin), (8) primary person for the task (guy_1), (9) secondary person for the task (guy_2), and (10) clocktime (hour). The secondary person refers to the operator who is also able to perform the same task when the primary operator is overloaded or has made an error. Other types of information that could possibly be stored in this array (Notepad) include 'required' and 'achieved' speed of the task, 'required' and 'achieved' efficiency of the goal, and so on.

Access to the global array and changes to the task tokens can be made in the code segments of the transitions. The analyst has to balance the types of information to be held in the task token and the global array (Notepad) according to the problem at hand. It is also conceivable that some task information can be stored as global reference variables; for instance, operator workload (wk) is represented as a list of pairs of workload values and clocktime. Hence, the workload of each operator at different times can be made accessible globally in order to adapt the task sequences and the material resources to the workload of operators.
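The global reference structures described above can be pictured with the following Standard ML sketch: one Notepad entry per task, with the ten fields listed in the text, and a workload list of (value, clocktime) pairs per operator. The helper addup_work is an assumed counterpart of the routine of that name mentioned in section 6.4; the declarations actually used with Design/CPN appear in the appendices.

    (* Illustrative global reference structures; field names follow the ten
       items listed in the text and may differ from the Appendix III code. *)
    type notepad_entry =
      {gid : int, tid : int, pri_g : int, pri_t : int, state : string,
       exact : real, exact_fin : real, guy_1 : string, guy_2 : string, hour : int}

    val notepad : notepad_entry list ref = ref []          (* global reference array *)

    (* Operator workload (wk): a list of (workload value, clocktime) pairs. *)
    val wk_tom : (int * int) list ref = ref []
    val wk_bil : (int * int) list ref = ref []

    (* Assumed helper: record a workload increment at a given clocktime. *)
    fun addup_work (wk : (int * int) list ref) (load : int) (hour : int) =
      wk := (load, hour) :: !wk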

6.2 An overview of the generic task model

Coloured Petri Nets allow hierarchical descriptions of nets. The nodes (i.e., places and transitions) of the network can be grouped together to form a 'substitution' transition, which is depicted as a rectangle with double lines and the symbol HS. Substitution transitions are the means of creating nested hierarchies of nets in Design/CPN. A normal rectangle can be used for transitions representing tasks while a small rectangle can be used for transitions that are simply required for creating AND-split or AND-join blocks. Places are depicted as ovals or rectangles with rounded edges while trigger places can be depicted as circles. Trigger places hold tokens that are simply used for triggering events and messages. It is also useful to represent 'buffer' places (depicted as ovals with double lines) for storing task information accessible to all places directly connected to the buffer.

A generic task model is composed of several 'substitution' transitions representing high-level processes, such as Assess, Assign, and Plan (see symbol HS in Figure 5). A trigger token (r) fires transition Init in order to initialize all variables and generate three goals in the place In_1. A colour Goal_buffer has been specified for this place that holds a list of goals for further processing in the 'substitution' transition Assess (see Appendix I for a declaration of colours). A number of transitions inside Assess are used for updating the display, monitoring process parameters, and selecting goals for processing.

In 'substitution' transition Assess (see Figure 10 for details) a goal is selected and given the highest priority value. As a result, the global array (Notepad) updates all its elements that contain information about this goal. In the generic task model, a global reference variable (g_in) is used for keeping track of goals currently in progress (see code segment of transition Gate in Figure 5). The goal token enters 'substitution' transition Assign (see Figure 4) and carries information about goal identification, priority and status (i.e., complete or incomplete). Goals are assigned to staff according to the conditions specified in the code segment of transition Assign. In the same way, tools can be assigned to staff but this process has been omitted from the task model. Salimifard and Wright (2001b) present a more elaborate version of this job assignment process. In the current model, the assignment is made each time a goal passes through transition Assign. It is also possible to change the assignment of tools for each individual task. Salimifard and Wright (2001b) present such a task model emphasizing aspects of dynamic tool assignment in workflow management systems.

--------------------------------------
Insert Figure 5 here
--------------------------------------

The output of the assignment is the creation of task tokens that enter the 'substitution' transition Plan for the actual processing of tasks. The transition Update is used for updating the state of goals held in the place In_1. The routine FILTER in the code segment inspects all tasks in the global array (Notepad) whose states are either 'aborted' or 'blank' (i.e., not done yet). If the goal being processed contains tasks in the above states then its status is considered to be incomplete. In this case, a new goal token is generated and attached to the list of goals in the place In_1. Goals that have been completed (i.e., #status(goal)=complete) are not attached to the list of goals (tL) in the goal buffer. Incomplete goals are processed again in 'substitution' transition Assess and, when their priority rank and system conditions allow, they are sent back to transitions Assign and Plan for further processing.
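A possible reading of the check performed by FILTER is sketched below in Standard ML: a goal is reported as incomplete when any of its tasks recorded in the Notepad is still 'aborted' or 'blank'. Field names are assumptions; the actual routine belongs to the Appendix III code.

    (* Sketch of the goal-status check attributed to the FILTER routine. *)
    type entry = {gid : int, tid : int, state : string}

    fun goal_incomplete (notepad : entry list) (goal_id : int) : bool =
      List.exists (fn (e : entry) =>
                     #gid e = goal_id andalso
                     (#state e = "aborted" orelse #state e = "blank"))
                  notepad

    (* Example: goal 3 with tasks 3, 4 and 5 aborted or not yet done is incomplete. *)
    val demo = goal_incomplete
                 [{gid = 3, tid = 3, state = "aborted"},
                  {gid = 3, tid = 4, state = "aborted"},
                  {gid = 3, tid = 5, state = "blank"}] 3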

6.3 Planning aspects of the task model

There are two important aspects of the planning process: (i) how goals are selected for further processing, and (ii) how 'suspended' or 'paused' tasks are handled. Figure 6 shows that an OR-split followed by an OR-join block is used for selecting between the three goals. 'Substitution' transitions Goal_1, Goal_2 and Goal_3 represent hierarchies of tasks that are selected in accordance with the conditions specified in their guards. For example, the guard [#gid(s)=1] of transition Goal_1 accepts only task tokens whose identity number is 1. Transition Return is used for returning tool tokens and staff tokens to their original places Tools and Staff. These places are 'fusion' places, that is, places whose data are also fused inside the structure of 'substitution' transition Assign. Fusion places (see symbol FG in Figure 6) are a convenient way of modeling shortages of resources since their tokens can be absorbed by several connected places.

The handling of suspended or paused tasks is a challenging topic for modeling in Petri Nets. In real life, tasks can be 'suspended' at a certain state and 'resumed' at a later period of time. In Petri Nets, the execution of transitions cannot be stopped once they are fired.

To overcome this problem, a task may be specified as a group of two transitions Start and End, separated by a place Process, as advocated by Salimifard and Wright (2001a). In the proposed technique, tasks are represented as a single transition that always fires, but the state of the task is updated by the code segment of the transition. If a task is suspended, for instance, the code segment assigns the value 'aborted' to the variable state of the global array (Notepad). The variable state can take the values: aborted, done, blank (i.e., task not visited yet) and passed (i.e., task done in a previous visit). This method makes modeling of errors and dependencies easier, as will be shown in the next section. An error, for instance, can be represented as a global reference variable (e.g., call_error) that is inspected by all code segments so that the state, duration and speed of each task is adjusted accordingly.

--------------------------------------
Insert Figure 6 here
--------------------------------------

Suspended tasks should be resumed at later periods of time. This can be a difficult job in Petri Nets because re-activation of a transition requires that all input places are filled with tokens. However, this may cause unintentional activation of other transitions having common input places. The problem is exacerbated in cases where resumption is only possible at later stages when the task token has already left a subpage or hierarchy and visited another one. The approach adopted here is to allow a goal with an incomplete state to re-visit a task network and activate any tasks in 'aborted' states (see section 6.4 for details). However, even this approach may result in delays in executing some important tasks (i.e., high priority tasks) that have been suspended. For this reason, any tokens entering 'substitution' transition Plan can execute some high priority pending tasks. This is the purpose of transition Attend to Pending Tasks, which activates and executes suspended tasks of high priority that may even belong to other goals. Pending tasks of low priority are resumed by tokens in the respective subpages of the particular goal they belong to (see further details in Figure 7).
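The state-based handling of suspended tasks can be summarised with the small Standard ML sketch below: on a revisit, tasks recorded as 'aborted' or 'blank' are executed, while tasks completed earlier are marked 'passed' and skipped. This is an illustrative reading of the mechanism, not the code of Appendix III.

    (* Sketch of the revisit rule for task states described above. *)
    fun revisit (state : string) : string * bool =      (* (new state, execute now?) *)
      case state of
          "aborted" => ("done", true)      (* suspended task is resumed and executed *)
        | "blank"   => ("done", true)      (* task not visited in a previous pass    *)
        | _         => ("passed", false)   (* 'done' or 'passed': skip on this visit *)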

6.4 Adapting task sequences to workload and human error

Goals that are selected for further processing in the 'substitution' transition Plan can be represented as hierarchies of tasks and sequences. An example of a task sequence for goal 1 is shown in Figure 7. The conditions of firing of transitions are based upon the information provided to the routine MANAGE_TASK, such as the identity of goals and tasks, the workload for the task, the duration of the task, the identity of and person responsible for the next task, and the identification of the task token (p,s). The duration of each task is specified in the inscription of its output arc (e.g., @+delay). Although the transition fires immediately, the task token is made available to the output place when the delay has elapsed. The task information contained in the global reference array (Notepad) is annotated with new data and the time at which the task was completed is inserted in the variable hour of the global array; in this manner, the movement of task tokens is synchronized with the annotation of information. The code segments in the transitions specify whether a task is suspended, takes more time, degrades in accuracy, or is done erroneously.

Suspended tasks can be performed during another visit of a token when transition Assess (Figure 9) judges that system conditions allow the completion of goal 1; another token is sent into the task structure of Figure 7 which executes the suspended tasks. Hence, there is a need for all code segments to inspect whether a task has been previously done or not. If tasks have already been done, the code segments assign the value 'passed' to the state of the completed tasks.

--------------------------------------
Insert Figure 7 here
--------------------------------------

The task sequence in Figure 7 contains a mixture of serial processing (e.g., tasks 6 and 7), parallel processing (e.g., tasks 8 and 9) and choices between alternatives (e.g., tasks 10 and 11). Parallel processing of tasks can be done either by the same or by different operators. In the former case, parallel processing may result in errors when the workload is high. Adaptive task sequences should be able to switch between parallel and serial processing in order to prevent errors from occurring. To achieve this sort of adaptability, the AND-split block incorporates the routine ADJUST_SEQUENCE that may delay the execution of task 8 (i.e., @+delay_8) or task 9 by a certain amount of time (e.g., 60 seconds) when the projected workload of time sharing is above a default value (e.g., 270 units).

For the calculation of the workload, routine ADJUST_SEQUENCE uses an additive model that takes into account the loads for tasks 8 and 9 (e.g., 30 units) and a penalty for executing the tasks in parallel (e.g., 40 units). If the projected workload from doing the two tasks together is above the limit then the task with the higher load is executed first, followed by the other task. The workload for the operator who performs these tasks is updated, for the first time, when transitions Task_8 and Task_9 fire and, for a second time, when transition Adjust Workload fires. This transition incorporates a routine ADJUST_WORKLOAD in the AND-join that checks whether the two tasks have been done in parallel or not and accordingly applies a penalty (e.g., addup_work(man,40,h)).

From this discussion, it appears that workload is the most important variable for adapting task sequences. High workload could be one reason for making adaptations to the sequence of tasks, changing the allocation of tasks to operators, and making errors. The Undifferentiated Model of Resources (see Wickens et al., 1988, for a comparison of workload models) has been adopted, which presents a simple additive model of workload that adds fixed penalties in cases of parallel processing. More sophisticated workload models have been used in off-the-shelf software packages, not necessarily based on Petri Nets (e.g., WinCrew). A global reference variable (wk) is assigned to each operator that stores information about workload at different clocktimes. Workload variables are inspected by code segments to decide the outcome of task execution. Human errors are modeled separately from workload, although they can be the result of high workload.
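The additive decision behind ADJUST_SEQUENCE can be paraphrased in Standard ML as follows. The loads of 30 units, the time-sharing penalty of 40 units and the limit of 270 units are the example values quoted in the text; the inclusion of the operator's current workload in the projection, as well as the function and type names, are assumptions made for this sketch rather than the Appendix III routine.

    (* Sketch of the additive workload model used to switch between parallel
       and serial processing of tasks 8 and 9. *)
    datatype schedule =
        Parallel                     (* both tasks may start together           *)
      | SerialFirst of int           (* serialize: the given task id goes first *)

    fun adjust_sequence (current_wk : int)
                        (task8 : int * int)             (* (task id, load) *)
                        (task9 : int * int)
                        (penalty : int) (limit : int) : schedule =
      let
        val (id8, load8) = task8
        val (id9, load9) = task9
        val projected = current_wk + load8 + load9 + penalty
      in
        if projected <= limit then Parallel
        else SerialFirst (if load8 >= load9 then id8 else id9)
      end

    (* Example with the figures quoted in the text: loads of 30 units, a penalty
       of 40 units, a limit of 270 units and a current workload of 200 units. *)
    val plan = adjust_sequence 200 (8, 30) (9, 30) 40 270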

The consequences of errors should be integrated with the usual execution of tasks in the firing of transitions. Six types of error consequences have been considered in the routine TASK_ERROR (see Appendix III), namely:

1) The current task is not performed due to an error of omission (i.e., the task is aborted).

2) The current task takes more time to complete, which causes delays.

3) The accuracy or efficiency of the current task is reduced.

4) Another colleague, who is not overloaded, takes over the task.

5) The task results in higher workload than originally specified.

6) There is an impact on another task.
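One way to represent the six consequence types handled by TASK_ERROR is as a Standard ML datatype, sketched below. The constructor names and payloads are assumptions made for illustration and are not the representation used in Appendix III.

    (* Sketch of the six error consequences considered by TASK_ERROR. *)
    datatype error_consequence =
        Omission                               (* 1: task aborted                      *)
      | Delay of int                           (* 2: extra completion time             *)
      | DegradedAccuracy of real               (* 3: reduced accuracy or efficiency    *)
      | TakenOverBy of string                  (* 4: colleague takes over the task     *)
      | ExtraWorkload of int                   (* 5: higher workload than specified    *)
      | SideEffect of {goal : int, task : int} (* 6: impact on another task (call_off) *)

    (* Example: an error that delays the current task by 60 seconds. *)
    val consequence = Delay 60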

The last consequence refers to a side-effect that is implemented in terms of a global reference variable call_off containing information about the identity of the affected task and goal. The code segments of transitions inspect the call_off variable to examine whether the current task should be aborted due to a side effect. Errors can be triggered by high workload or by any transition specified by the analyst in order to examine the effect of human errors on the task network. In all cases, a global variable call_error is used that stores information about the identity of the goal, the task, the type of error, possible delays (i.e., error type 2), alternative operators for the task (error 4), degradation in accuracy (error 3), increases in workload (error 5), side effects (error 6), and lastly the probability of this error occurring.

--------------------------------------
Insert Figure 8 here
--------------------------------------

All transitions that represent tasks contain a code segment that uses the routine MANAGE_TASK, which makes all these arrangements about the effects of workload and human error on task execution. Figure 8 shows a schematic of the routine MANAGE_TASK that incorporates the subroutines TASK_ERROR and TASK_ALLOCATION. The software code was written in Standard ML and is presented in Appendix III. The only information that the analyst has to specify in routine MANAGE_TASK is: the identity of the current goal and task, the workload for doing the task, the duration of the task, the identity of the next task and the name of the staff, and finally, the identity of the task token (p,s).

7. Performance data collected in a case study

The previous figures of task modeling were part of a case study that was used to collect data about the workload of two operators doing a process control task. The case study also helps to illustrate several aspects of the assessment process specified in 'substitution' transition Assess. A supervisor (called Tom) and an assistant operator (called Bil) are responsible for a process control task which requires a lot of parallel processing. Overall, thirteen tasks should be performed, each of them lasting for two minutes only. Five tasks were grouped as goal 3, six tasks were grouped as goal 1 and two tasks as goal 2. The priorities of goals, the priorities of tasks and the personnel in charge are shown in Table 1. Incidentally, these columns comprise the elements of the global reference array (Notepad) in its initial state. Although no data about task accuracy were collected, the 'required' accuracy values in the sixth column were included for illustrating the elements of the global array. In general, the calculation of task accuracy requires a small extension of the routines to allow for the incorporation of values about staff and tool efficiency. A hierarchical task analysis of the operator jobs is presented in Figure 9 in a way that makes the Petri Net based task analysis easier to understand.

--------------------------------------
Insert Table 1 and Figure 9 separately
--------------------------------------

A generic model for the assessment process is shown in Figure 10 and is illustrated by reference to this particular case study. The two operators monitor the state of a key parameter on a display and make decisions about goal priorities. The displayed parameter (e.g., temperature) initially takes the value 'medium' for 5 minutes and then its state is determined by a probabilistic routine specified in the code segment of transition Update_Display.

During the period 5 to 10 minutes, there is an 80% chance that

temperature becomes high but, after that, this chance becomes 50%. Transition Monitor specifies that goals 3 and 1 should be selected when temperature is medium and high respectively. Goal 2 is executed when both goals 1 and 3 have been completed successfully. The frequency at which the operators inspect the temperature is determined by the code segment of transition Decide. In this case study, however, we have assumed that the operators know how often the display is updated and follow a similar strategy (i.e., delay_1=delay_2). The assessment process starts when the system clock shows 1 time unit (e.g., seconds). Trigger places P1 and P2 are initially marked in order to activate the transition Monitor which sends an integer token about the identity of the recommended goal to transition Decide.

A tuple token (i.e., a pair of variables) is also sent to transition

24

Update_Display to provide information about the current clocktime and the temperature value (par,h1). The probabilistic routine in transition Update_Display sends a new tuple token (par,time()) to the trigger place P2 taking into account the clocktime (h1) and the temperature value (par). -------------------------------------Insert Figure 10 here --------------------------------------Transition Decide works as follows. If the temperature is medium and goal 1 is not in progress then a goal token is sent to place Out-1 for processing goal 3. An integer token is also made available to the trigger place P1, after a certain delay, indicating the current clocktime (h2). The delay_2 corresponds to the frequency at which operators sample information from the display. The code segment in transition Decide specifies that goal 3 should be processed first but, when the temperature becomes high, goal 1 should take priority and executed, even if this means suspending the performance of goal 3. The supervisor is responsible for monitoring the display and making decisions; as a result, his workload increases by 20 units.

However, when the workload of the supervisor is above a default value, the assistant operator takes over the monitoring task.
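To make the two code segments concrete, the sketch below gives a minimal Standard ML rendering of the logic just described. It is an illustration rather than the code attached to the transitions in Figure 10: the helper random (returning a real in [0,1)), the use of seconds as time units and the string encoding of the temperature are all assumptions.

(* Sketch of the probabilistic routine in Update_Display: the temperature  *)
(* stays medium for the first 5 minutes, has an 80% chance of becoming     *)
(* high between minutes 5 and 10, and a 50% chance thereafter.             *)
fun update_display (clock : int, random : unit -> real) : string =
  if clock < 300 then "medium"
  else if clock < 600 then (if random () < 0.8 then "high" else "medium")
  else (if random () < 0.5 then "high" else "medium");

(* Sketch of the rule in Decide: goal 3 while the temperature is medium,    *)
(* goal 1 when it becomes high, and goal 2 once goals 1 and 3 are complete. *)
fun decide (temperature : string, goal1_done : bool, goal3_done : bool) : int =
  if goal1_done andalso goal3_done then 2
  else if temperature = "high" then 1
  else 3;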

The history of goal processing in this case study is as follows. The operators start with goal 3 but change to goal 1 after 5 minutes because the temperature becomes high (i.e., with a probability of 80%). Tasks 1 and 2 are completed but tasks 3, 4 and 5 are aborted, and the list of goals in the place In_1 is annotated to indicate that the state of goal 3 is incomplete. Subsequently, goal 1 enters the task structure in Figure 7 and performs all of its tasks except task 6, for which an error has been simulated by the analyst. The information in the place In_1 is annotated to indicate that the state of goal 1 is incomplete.

The operators then decide to proceed with goal 3 because the temperature has taken the value ‘medium’ (i.e., with a probability of 50%). As the goal token for goal 3 enters the transition Attend to Pending Tasks (Figure 6), the suspended task 6 is executed. Although this task belongs to goal 1, it is executed here because of its high priority.

The goal token for goal 3 then activates the ‘substitution’ transition Goal_3 and executes all tasks that were aborted earlier.
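A minimal sketch of the check performed by Attend to Pending Tasks is given below. The entry layout, the state label "suspended" and the priority threshold are illustrative assumptions; the actual routine operates on the global reference array described earlier.

(* Return the suspended, high-priority tasks of other goals so that they   *)
(* can be executed before the incoming goal resumes its own task sequence. *)
fun pending_tasks (current_goal : int, threshold : int,
                   entries : (int * int * int * string) list) =
  (* each entry: (goal id, task id, task priority, state) *)
  List.filter (fn (g, _, pri, state) =>
                 g <> current_goal andalso state = "suspended"
                 andalso pri >= threshold)
              entries;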


Finally, the operators execute tasks 12 and 13, which belong to goal 2. However, the primary operator (i.e., Bil) is unable to perform goal 2 because his workload has exceeded a default value (e.g., w_max >= 270 units).

As a result, the supervisor (i.e., Tom) executes the two tasks, since he has been assigned as the secondary operator. Figure 11 shows a time trace of the workload of the two operators. As can be seen, the distribution of workload is uneven for most of the time. Another task allocation strategy could have been designed to keep workload differences within smaller boundaries. This can easily be done with a small modification of the routines MANAGE_TASK and TASK_ALLOCATION, as sketched below.
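A possible shape for such a modification is the following: the task is simply handed to whichever operator currently carries the lighter load, subject to a tolerance band. The function name, the tolerance parameter and the string encoding of operators are assumptions; the sketch stands in for the fuller logic of TASK_ALLOCATION in Appendix III.

(* Workload-balancing variant of the allocation rule *)
fun allocate_balanced (primary : string, secondary : string,
                       load_primary : int, load_secondary : int,
                       tolerance : int) : string =
  if load_primary - load_secondary > tolerance then secondary
  else if load_secondary - load_primary > tolerance then primary
  else primary;   (* within the tolerance band, keep the default assignment *)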

-------------------------------------
Insert Figure 11 here
--------------------------------------

8. Conclusion

The proposed technique was aimed at integrating task analysis and task modeling within a single framework.

The twelve requirements have been derived from the existing literature in order to provide a basis for developing and evaluating the new technique. The requirements of adaptation to system changes, task simulation and formal analysis have been proposed by the author to allow for task modeling and simulation. The requirement of state-based and event-based descriptions supplements the eight requirements suggested by Killich et al. (1999).

Three main contributions have been made in the current work. The first concerns the development of a classification scheme of plans to facilitate learning of Petri Net-based descriptions, transfer of expertise from manual methods of task analysis and identification of different control sequences and task demands (e.g., time sharing of activities). The second contribution concerns the development of a Petri Net notation of task structures, as illustrated in the job assignment process of Figure 4. The final and main contribution concerns the routines written in Standard ML for adapting task sequences to system changes that could give rise to task interruptions, changes in goal priorities, changes in task allocation, high workload and human errors.


A concise evaluation of the technique can be made in terms of the twelve requirements presented in section 2.

Other studies utilizing Petri Nets in workflow management systems are also cited here in order to provide support for some merits of the proposed approach that, although within its reach, have not been demonstrated in this article. The proposed technique presents an integrated view of control and information flows, but not of object flows. Attributes of objects could be modeled in the task tokens, but the movement of objects (e.g., documents, equipment, and catalogues) would require other techniques, such as the Role Function Action (RFA) nets proposed by Oberquelle (1987). Representation of physical tools and organizational roles can also be modeled as attributes of task tokens.
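As a rough illustration, a task token enriched with such attributes might look as follows; the field names are assumptions and, in the Design/CPN model, the type would be declared as a colour set rather than a plain Standard ML record.

(* Hypothetical task token carrying tool and role attributes *)
type task_token =
  {gid  : int,       (* goal identifier                        *)
   tid  : int,       (* task identifier                        *)
   man  : string,    (* operator performing the task           *)
   role : string,    (* organizational role of the performer   *)
   tool : string};   (* physical tool or document being used   *)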

A good example of another approach is the work of Salimifard and Wright (2001a, 2001b) in the context of workflow management systems. However, the strict specification of organizational roles suggested by Killich et al. (1999), where the whole task or work organization is modeled in terms of roles, has not been achieved here.

The Role Function Action nets (Oberquelle, 1987) fully satisfy this requirement, but this is possibly not necessary in the manufacturing and process control industries. Instead, the requirement of state-based and event-based descriptions is more important for those industries.

State-based descriptions are at the heart of Petri Nets and provide a convenient way of modeling data-driven aspects of the task. This requirement is fully satisfied by the new technique.

The current work has focused mainly on aspects of control, command and decision making. In this sense, the proposed classification of plans seems to be valuable in achieving several requirements of task analysis. The plan taxonomy is the result of an integration of hierarchical task analysis (Kontogiannis and Shepherd, 2000) and workflow patterns (Aalst et al., 2000). It may be argued that a hierarchical task analysis can easily be transformed into a Petri Net with the use of the plan taxonomy. To a certain extent, the plan taxonomy can also accommodate some aspects of the temporal specification of the flow of activities. The discretionary plans, the unordered sequence, the deferred choice and the probabilistic choice plans allow for more abstract descriptions of activities. Thus, several aspects of this requirement can be fulfilled with the taxonomy of plans.
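For example, a probabilistic choice plan can be reduced to a few lines in a code segment attached to the choice transition. The sketch below assumes a helper random (returning a real in [0,1)) and a probability of 0.25 for branch A, as in Figure 3.

(* Probabilistic choice: do A with probability 0.25, otherwise do B *)
fun probabilistic_choice (random : unit -> real) : string =
  if random () < 0.25 then "A" else "B";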

It is, however, very difficult to satisfy this requirement completely because Petri Nets have a formal notation requiring high precision in specifying how a task is performed in a network.

Adaptation to system changes has been satisfied in many respects by the new technique. At least, the basic architecture for introducing some types of adaptation has been presented here.

The new technique can accommodate adaptations of task sequences to system demands, such as switching between parallel and serial processing, changing the allocation of tasks to personnel, changing the priorities of goals and tasks, and interrupting and resuming tasks. In addition, workload and human errors can be modeled in order to collect performance data or to explore their effects on the task structure. It is desirable that more sophisticated models of workload and human error are developed in the future.

The requirement of cooperation and decision making has not been explored adequately in this article. Other studies in the context of workflow management (Aalst, 2000) and command and control military systems (Shin and Levis, 1999) have shown that Petri Net-based descriptions of collective tasks are very useful in identifying points where synchronization is lacking. The new technique does not explicitly show interactions between team members, since these are not made visible in the Petri Net graph. This weakness is a side effect of the adaptability in the allocation of tasks when high workload and errors occur.

Further research is needed to explore how adaptability can be achieved by depicting team members and communications on the Petri Net graph rather than hiding them inside the code segments.

The requirement of decision making has been partially satisfied. The taxonomy of plans makes an important distinction between explicit and deferred choices, while several types of control sequences are illustrated. However, many aspects of decision making have not been made visible in the Petri Net graph because they were incorporated in the code segments. In general, a representation of dynamic decision making may clutter the graph with too many arcs; on the other hand, reliance on code segments reduces the visibility of decisions on the net.

There are very few studies that have developed flexible and dynamic models of decision making based on Petri Nets (e.g., Shin and Levis, 1999; Wagenhals et al., 1998). This area needs further development by Petri Net researchers.

The utility-driven requirements have been satisfied to a large extent; this argument is in agreement with the study of Killich et al. (1999). The new technique presents a hierarchical view of task description whereby complex tasks are replaced by nested hierarchies. However, the descriptions are not truly hierarchical in the sense that Lakos (1997) argues: it is not possible, for instance, to have ‘abstract’ task tokens at the different levels of the hierarchy that change their structure as they move between levels. Lakos (1997) presented an intriguing architecture for achieving this sort of flexibility in task tokens. The requirements of transparency and usability, on the other hand, have been satisfied to a certain extent. However, some knowledge of Standard ML is needed in order to specify the code segments and use the routines proposed in this article.

Finally, the requirements of task simulation and formal analysis have been fully met by the notation of Petri Nets.

Overall, the new technique demonstrates that task analysis and task modeling can be integrated in a single framework. Several aspects of adaptability have been explored and implemented although, in some cases, it was not possible to show this adaptability on the Petri Net graph itself. New work has already started in exploring how to achieve this sort of adaptability on the Petri Net graph rather than in the code segments. Research in Petri Nets is growing and the results seem to be applicable to ergonomic task analysis and modeling.

In particular, developments in object-oriented Petri Nets are very promising for incorporating inheritance of task attributes and evolutionary changes in work design (e.g., Aalst and Jablonski, 2000; Lakos, 1995).


9. References

van der Aalst, W.M.P., 1998a. The application of Petri Nets to workflow management. The Journal of Circuits, Systems and Computers, 8, 21-66.
van der Aalst, W.M.P., 1998b. Three good reasons for using a Petri net-based workflow management system. In: Wakayama, T. (Ed.), Information and Process Integration in Enterprises: Rethinking Documents. Kluwer Academic Publishers, Norwell, pp. 161-182.
van der Aalst, W.M.P., 2000. Loosely coupled interorganizational workflows: Modeling and analyzing workflows crossing organizational boundaries. Information and Management, 37, 67-75.
van der Aalst, W.M.P., Jablonski, S., 2000. Dealing with workflow change: identification of issues and solutions. Computer Systems Science and Engineering, 5, 267-276.
van der Aalst, W.M.P., Hofstede, A.H., Kiepuszewski, B., Barros, A.P., 2000. Advanced workflow patterns. In: Etzion, O., Scheuermann, P. (Eds.), 7th International Conference on Cooperative Information Systems (CoopIS 2000), Lecture Notes in Computer Science, Vol. 1901, Springer-Verlag, Berlin, pp. 18-29.
Coovert, M.D., Craiger, J.P., 1997. Modeling performance and establishing training criteria in training systems. In: Ford, K., Kozlowski, S., Kraiger, K., Salas, E., Teachout, M. (Eds.), Improving Training Effectiveness in Work Organization. Lawrence Erlbaum Associates, Hillsdale, New Jersey, pp. 47-71.
Coovert, M.D., Dorsey, D.W., 1994. Simulating individual and team expertise in a dynamic decision making environment. In: Verbraek, A., Sol, H.G., Bots, P.W.G. (Eds.), Proceedings of the Fourth International Working Conference on Dynamic Modeling and Information Systems, Delft University Press, The Netherlands, pp. 187-204.
Coovert, M.D., McNelis, K., 1992. Team decision making and performance: A review and proposed modeling approach employing Petri nets. In: Swezey, R.W., Salas, E. (Eds.), Teams: Their Training and Performance. Ablex, Norwood, New Jersey, pp. 247-280.
Design/CPN version 4.0. Meta Software Corporation, Cambridge, MA. Also available from the Department of Computer Science, University of Aarhus, at http://www.daimi.au.dk/DesignCPN.
Dix, A., Ramduny, D., Wilkinson, J., 1998. Interaction in the large. Interacting with Computers, 11, 9-32.
Jensen, K., 1997a. Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use. Volume 1: Basic Concepts. Monographs in Theoretical Computer Science, Springer-Verlag, Berlin.
Jensen, K., 1997b. Coloured Petri Nets: Basic Concepts, Analysis Methods and Practical Use. Volume 2: Analysis Methods. Monographs in Theoretical Computer Science, Springer-Verlag, Berlin.
Johnson, C., 1998. Representing the impact of time on human error and systems failure. Interacting with Computers, 11, 53-86.
Glenn, F.A., Schwartz, S.M., Ross, L.V., 1992. Development of a Human Operator Simulator Version V (HOS-V): Design and Implementation. U.S. Army Research Institute for the Behavioral and Social Sciences, PERI-POX, Alexandria, VA.
Killich, S., Luczak, H., Schlick, C., Weissenbach, M., Wiendemaier, S., Ziegler, J., 1999. Task modeling for cooperative work. Behaviour and Information Technology, 18, 325-338.
Kontogiannis, T., Shepherd, A., 1999. Training conditions and strategic aspects of skill transfer in a simulated process control task. Human Computer Interaction, 14, 355-393.
Kontogiannis, T., Leopoulos, V., Marmaras, N., 2000. A comparison of accident analysis techniques for safety-critical man-machine systems. International Journal of Industrial Ergonomics, 25, 327-347.
Lakos, C.A., 1995. From Coloured Petri Nets to Object Petri Nets. Proceedings of the 16th International Conference on the Application and Theory of Petri Nets, Lecture Notes in Computer Science, Vol. 935, Springer-Verlag, Berlin, pp. 278-297.
Lakos, C.A., 1997. On the abstraction of Coloured Petri Nets. Proceedings of the 18th International Conference on the Application and Theory of Petri Nets, Lecture Notes in Computer Science, Vol. 1248, Springer-Verlag, Berlin, pp. 42-61.
Laughery, K.R., Corker, K.M., 1997. Computer modeling and simulation of human-system performance. In: Salvendy, G. (Ed.), Handbook of Human Factors and Ergonomics, 2nd edition, Wiley & Sons, New York, pp. 1375-1408.
Love, L., Johnson, C., 1997. Using diagrams to support the analysis of system failure and operator error. In: Thimbleby, H., O´Conaill, B., Thomas, P. (Eds.), People and Computers XII: Proceedings of Human Computer Interaction ´97, Springer-Verlag, pp. 245-262.
Luczak, H., 1997. Task analysis. In: Salvendy, G. (Ed.), Handbook of Human Factors and Ergonomics, 2nd edition, Wiley & Sons, New York, pp. 340-416.
Micro SAINT. Micro Analysis & Design, Inc., 4900 Pearl E. Cir., Boulder, CO 80301, USA.
Oberquelle, H., 1987. Human machine interaction and Role-Function-Action nets. In: Brauer, W., Reisig, W., Rozenberg, G. (Eds.), Petri Nets: Applications and Relationships to Other Models of Concurrency, Advances in Petri Nets 1986, Part II, Lecture Notes in Computer Science, Vol. 255, Springer-Verlag, Berlin, pp. 171-190.
Petri, C.A., 1962. Kommunikation mit Automaten. PhD thesis, Schriften des IIM Nr. 2, Institut für Instrumentelle Mathematik, Bonn, Germany.
Ramaswamy, S., Valavanis, K., Barber, S., 1997. Petri Net extensions for the development of MIMO net models of automated manufacturing systems. Journal of Manufacturing Systems, 16, 175-191.
Raposo, A.B., Magalhaes, L.P., Ricarte, I.L., 2000. Petri Nets based coordination mechanisms for multi-workflow environments. International Journal of Computer Systems Science and Engineering, 15, 315-326. Also available at: http://www.dca.fee.unicamp.br/~alberto/pubs/IJCSSE/mechanisms/.
Reutenauer, C., 1990. The Mathematics of Petri Nets. Prentice Hall International, Englewood Cliffs, New Jersey.
Sadiq, W., Orlowska, M.E., 2000. Analyzing process models using graph reduction techniques. Information Systems, 25, 117-134.
Salimifard, K., Wright, M., 2001a. Petri-net based modeling of workflow systems: An overview. European Journal of Operational Research, 134, 218-230.
Salimifard, K., Wright, M., 2001b. MORaD-net: A visual modeling language for business processes. International Workshop on New Models of Business: Managerial Aspects and Enabling Technology, St. Petersburg, Russia, pp. 213-222.
Schraagen, J.M., Chipman, S.F., Shalin, V.L. (Eds.), 2000. Cognitive Task Analysis. Lawrence Erlbaum Associates, Mahwah, New Jersey.
Shepherd, A., 1993. An approach to information requirements specification for process control tasks. Ergonomics, 36, 805-817.
Shepherd, A., 1995. Task analysis. In: Monk, A.F., Gilbert, N. (Eds.), Perspectives on HCI: Diverse Approaches. Academic Press, London, pp. 145-174.
Shin, I., Levis, A., 1999. Performance prediction model generator powered by occurrence graph analyzer of Design/CPN. Proceedings of the 2nd Workshop on Practical Uses of Coloured Petri Nets and Design/CPN, Aarhus University, Denmark, pp. 191-210.
Stanton, N., Young, M.S., 1999. A Guide to Methodology in Ergonomics. Taylor & Francis, London.
Valavanis, K., 1990. On the hierarchical modeling analysis and simulation of flexible manufacturing systems with extended Petri Nets. IEEE Transactions on Systems, Man and Cybernetics, 20, 94-110.
Wagenhals, L.W., Shin, I., Levis, A.H., 1998. Creating executable models of influence nets with Coloured Petri Nets. International Journal of Software Tools for Technology Transfer, 2, 168-181.
Weingaertner, S.T., Levis, A.H., 1989. Analysis of decision aiding in submarine emergency decision making. Automatica, 25, 349-358.
Wickens, C.D., Harwood, K., Segal, L., Tkalcevic, I., Sherman, B., 1988. TASKILLAN: A simulation to predict the validity of multiple resource models of aviation workload. Proceedings of the 32nd Annual Meeting of the Human Factors Society, Human Factors Society, Santa Monica, CA, pp. 168-172.
WinCrew. Micro Analysis & Design, Inc., 4900 Pearl E. Cir., Boulder, CO 80301, USA.


Captions for Figures and Tables

Figure 1. Building blocks for operator plans
Figure 2. Sequential and discretionary plans of the classification scheme
Figure 3. Choice, parallel, contingent and remedial plans (continued)
Figure 4. A Coloured Petri Net (CPN) representation of the job assignment process
Figure 5. A generic task model of a CPN-based task analysis
Figure 6. A CPN representation of the planning process
Figure 7. A sequence of tasks for goal 1 showing choice, serial, and parallel processing
Figure 8. A schematic of the routine MANAGE_TASK comprising subroutines TASK_ALLOCATION and TASK_ERROR
Figure 9. An HTA of the case study for making reference to the CPN based task analysis (underlined text is not further described)
Figure 10. A CPN representation of the assessment process in the case study
Figure 11. A time trace of the workload of the two operators involved in the case study
Table 1. Identification codes, priorities, personnel and states of the tasks and goals specified in the global reference array (Notepad) of the case study
Appendix I. Global declaration of colors and variables for a Coloured Petri Net representation of tasks
Appendix II. A sample of elementary routines and a specification of routines ADJUST_SEQUENCE and ADJUST_WORKLOAD written in Standard ML
Appendix III. A specification of routines MANAGE_TASK, TASK_ALLOCATION and TASK_ERROR written in Standard ML


[Figure 1: Building blocks for operator plans. The figure shows the four basic constructs AND-split (fork), AND-join (synchronization), OR-split (choice) and OR-join (merge).]

[Figure 2: Sequential and discretionary plans of the classification scheme. Sequential plans: fixed sequences (do A and B in a specified order), prioritized sequences (do both A and B, giving priority to A), unordered sequences (do both A and B in any order, but not concurrently), interleaved sequences (start B before A is completed) and N-out-of-M joins (do only n out of m tasks). Discretionary plans: discretionary inclusive plans (do both A and B, in any order or concurrently), discretionary exclusive plans (do either A or B, in any order) and optional plans (do either A or B, or both).]

[Figure 3: Choice, parallel, contingent and remedial plans (continued). Choice plans: explicit choices (choose A or B depending on rules), deferred choices (choose A or B depending on the availability of triggers A and B) and probabilistic choices (do A or B according to a probability, e.g. 0.25 versus 0.75). Other plans: parallel plans (do A and B concurrently), contingent plans (do A, then do B when cued by x) and remedial plans (repeat A until x >= 10, then do B).]

Appendix II. A sample of elementary routines and a specification of routines ADJUST_SEQUENCE and ADJUST_WORKLOAD written in Standard ML.

(* fragment of fun adjust_sequence; the opening declarations are not shown *)
      ... !w_max andalso state1 <> done andalso state2 <> done
      then (if load1 > load2
            then (p,s1,s2,0,delay)
            else (p,s1,s2,delay,0))    (* serial processing of tasks *)
      else (p,s1,s2,0,0))              (* concurrent processing by the same operator *)
   else (p,s1,s2,0,0)                  (* concurrent processing by the other operator *)
end;

fun adjust_workload (goal,task1,task2,L) =
  (* check whether the two tasks have been done by the same person *)
  let
    val state1 = look_state(goal,task1,L);
    val state2 = look_state(goal,task2,L);
    val hour1  = look_hour(goal,task1,L);
    val hour2  = look_hour(goal,task2,L);
    val man1   = look_man(goal,task1,L);
    val man2   = look_man(goal,task2,L);
  in
    if (state1 = done andalso state2 = done andalso man1 = man2)
    then (if hour1 = hour2
          then (man1, true)     (* concurrent processing by one operator *)
          else (man1, false))   (* serial processing by one operator *)
    else (man2, false)          (* serial processing by the other operator *)
  end;

Appendix III. A specification of routines MANAGE_TASK, TASK_ALLOCATION and TASK_ERROR written in Standard ML.

fun manage_task(goal,task,load,delay,next_task,next_man,p,s) =   (* perform a task *)
  let
    val L = read_data Notepad;   (* convert the global array Notepad to a list L *)
    val (gid,tid,pri_g,pri_t,state,exact,exact_fin,guy_1,guy_2,hour) = nth(L,task-1);
    val pri = look_goal_pri(goal,L);   (* look up the priority of the goal *)
    val (x1,x2,x3,new_delay,new_man,new_exact,new_load,impact,probability) = !call_error;
  in
    (* goal x1, task x2, error type x3 *)
    if pri ...   (* branches below handle the error type x3 *)
       1 => new := (aborted,guy_1,exact,0,0)        (* omission *)
     | 2 => new := (done,guy_1,exact,0,new_delay)   (* reduced speed *)
     | 3 => new := (done,guy_1,new_exact,0,0)       (* reduced accuracy *)
     | 4 => new := (done,new_man,exact,0,0)         (* another person takes over *)
     | 5 => new := (done,guy_1,exact,new_load,0)    (* higher workload than expected *)
     | 6 => (new := (state,guy_1,exact,0,0);        (* side effect on another task *)
             call_off := (goal,impact,1));
    let
      val (condition,personnel,accuracy,workload,duration) = !new
    in
      call_error := (goal,task,0,0,guy_1,0,0,0,false);   (* return the error probability to zero *)
      update_all(goal,task,condition,accuracy,personnel,h+duration);
      addup_work(personnel,workload,h+duration);
      ({gid=goal,tid=task,man=person,state=condition},
       {gid=goal,tid=next_task,man=next_man,go=true},
       duration)
    end
  end;
