Synergizing Standard and Ad-Hoc Processes

Andreas S. Rath (1), Mark Kröll (1), Keith Andrews (3), Stefanie Lindstaedt (1), Michael Granitzer (1), and Klaus Tochtermann (1,2)

(1) Know-Center Graz, Inffeldgasse 21a/II, 8010 Graz, Austria
{arath, mkroell, slind, mgrani, ktochter}@know-center.at

(2) Knowledge Management Institute, Graz University of Technology, Inffeldgasse 21a/II, 8010 Graz, Austria

(3) Institute for Information Systems and Computer Media (IICM), Graz University of Technology, Inffeldgasse 16c, 8010 Graz, Austria
[email protected]

Abstract. In a knowledge-intensive business environment, knowledge workers perform their tasks in highly creative ways. This essential freedom required by knowledge workers often conflicts with their organization's need for standardization, control, and transparency. Within this context, the research project DYONIPOS aims to mitigate this contradiction by supporting the process engineer with insights into the process executer's working behavior. These insights constitute the basis for balanced process modeling. DYONIPOS provides a process engineer support environment with advanced process modeling services, such as process visualization, standard process validation, and ad-hoc process analysis and optimization services.

Keywords: process modeling, knowledge utilization, ad-hoc process mining, process engineer services, knowledge capturing, process visualization.

1 Introduction

In a rapidly changing world, organizations of all kinds strive for standardization, control, transparency, and quality assurance. Workflow Management Systems (WFMSs) have become quite widespread in supporting the progress and development of organizations, and it is generally accepted that these systems have made a significant contribution to increased productivity [8]. The key discriminating feature of WFMSs is the flexibility they provide to deal with changes [17]. This adaptability is especially required in a knowledge-intensive business environment, where workers perform their work in a creative way (as opposed to a routine way). Knowledge workers are guided by goals instead of tasks and prefer significant freedom in structuring their own activities [12]. However, by allowing knowledge workers freedom for creativity, organizations may reduce their ability to standardize and control working procedures. This results in a dilemma of contradictory ambitions: the organizational need for standardization on the one hand and the essential freedom needed by knowledge workers on the other. In the middle of this dilemma are the process engineers, who represent the organizational needs. Their task is to model the processes of the organization and to refine them when changes are required. In fulfilling this task they face the following challenges (amongst others):

– Information Gathering. To model a process, the process engineers need information about what kind of work is done and how that work is done in the organization. A popular technique for collecting this information is interviewing key persons in the organization, such as project managers, regional directors, and team leaders. Data from WFMSs or pre-existing how-to documents (for example, how to write a requirements document, how to start a project, or how to organize a meeting) can also be used as information sources. A typical problem is that many varying descriptions of the same workflow are collected. The challenge for the process engineer is to identify the best information sources for retrieving workflow information.
– Large Amount of Information. Extensive collection of information from various sources is a good starting point for the process analysis step, in which the collected information is aggregated and possible process descriptions are elaborated. The challenge for the process engineer is to extract process descriptions from the collected information.
– Defining Standard Processes. Standard processes can be modeled in various process description languages, such as BPML (Business Process Modeling Language), BPEL (Business Process Execution Language), UML (Unified Modeling Language), and XPDL (XML Process Description Language), and with various applications, such as Microsoft Visio, Microsoft PowerPoint, ERP (Enterprise Resource Planning) systems, and the process modeling tools built into WFMSs. The challenge is to decide which modeling language to use for the process descriptions, because the decision has consequences for the further usage of the process descriptions in a specific WFMS.
– Process Change Detection. Since the business environment changes continuously, organizations must continually adapt their processes to new conditions. In addition to process changes arising from external factors, there are natural process deviations. Natural deviations from standard processes are also referred to as ad-hoc processes. These deviations can be a shortening, an extension, or more generally a structural change of a standard process. Ad-hoc processes happen when standard processes are not performed in the intended way, for example because a deviation is easier, more comfortable, or more efficient. The challenge for process engineers is to detect if and where deviations from standard processes occur and to adapt the standard processes if necessary.
– Ad-Hoc Process Capture. Ad-hoc processes happen steadily in organizations. The first step for process engineers is to detect in which element of the process chain deviations occur. The next step is to capture the composition of the ad-hoc process itself. The information about the ad-hoc process can then be used by the process engineer in a refinement step. If an ad-hoc process outperforms the existing standard process (for example, in terms of time efficiency, information flow, or resource flow), the standard process can be modified accordingly or replaced by the ad-hoc process.
– Enhancing Ad-Hoc Processes to Standard Processes. Ad-hoc processes are not necessarily deviations from standard processes. They can also arise from the composition of new tasks carried out by process executers. Information captured about ad-hoc processes can act as a valuable basis for the process engineer when modeling new standard processes. Sometimes, an ad-hoc process can advance to become modeled as a standard process.
– Modeling Knowledge-Intensive Work. Knowledge work is characterized by a large amount of creative activity [12]. Creative activities are very hard for a process engineer to model in advance. Since knowledge workers often reuse their existing knowledge to manage the complexity of their work [2], reuse patterns and resources can serve as a starting point for modeling knowledge-intensive work. A common method of reuse is templating, where past processes and their resources are used as templates for the knowledge worker's current work [2].

These challenges lead us to the objective of this paper: the presentation of the process engineer support environment of the DYONIPOS (DYnamic ONtology based Integrated Process OptimiSation) research project. DYONIPOS aims to support the two crucial roles in a knowledge-intensive organization, the process executer and the process engineer, by synergizing the organizational need for standardization, control, and transparency (standard processes) with the knowledge worker's essential need for creative freedom. The approach of DYONIPOS incorporates the development of solutions based on automatic and semi-automatic knowledge management methods and technologies, such as knowledge discovery, semantic systems, knowledge flow analysis, and process visualization. For a comprehensive overview of the DYONIPOS project see [12].

This paper is structured as follows: Section 2 discusses top-down and bottom-up approaches to the challenges process engineers face in process modeling, and a comparison of these approaches motivates the need for a new approach. The hybrid approach of DYONIPOS is presented in Section 3. Section 4 discusses the event, task, and process model (semantic pyramid) used by DYONIPOS. Section 5 outlines the support services DYONIPOS provides for the process engineer.

2 Top-Down and Bottom-Up Approaches

The common process modeling approach, where processes are modeled manually based on available process data and information, is called the top-down approach. Data and information are usually obtained from interviews, existing WFMSs, observations during site visits, document inspection, or (if available) previous process descriptions. The various information sources and the retrieved data need to be structured and aligned manually by the process engineer. Based on the analysis of the collected information, the process engineer models the processes. The choice of process modeling language and process description is based on the intended further usage of the process model. Supplying a specific WFMS with the process model is usually the next step after modeling, validation, and refinement of the processes. WFMSs have become quite popular for managing complex organizational processes, but fail to support knowledge-intensive and agile processes [9]. The problem with such processes is that they cannot be modeled in advance. A further problem of WFMSs is their limited ability to deal with dynamic changes [15] to the implemented, static process models. Weakly-structured workflows address this insufficiency by suggesting lazy and late modeling or by interleaving process modeling with process execution [16]. Detection of process changes is limited in standard WFMSs, because refinements of and deviations from standard workflows are usually not allowed, and hence no workflow logs about deviations exist. If process engineers want to validate whether existing process models are still current, a new round of time-consuming and costly information gathering has to be initiated. For the process engineer, it is also unclear when a process refinement has to take place, because in a WFMS there are no indicators of process change.

The contrasting approach to process modeling is the bottom-up approach, in which the information originates from process executers instead of process engineers [7]. The bottom-up approach is also referred to as process mining [4,14,18]. In this approach, the process model is derived from workflow, task, and/or event logs. In order to transform the monitored data stored in the logs into tasks, information retrieval, mining and monitoring techniques, and advanced algorithms [13] are needed. The advantages of this approach are the extensive data and information gathering possibilities and the continuous refinement and enhancement of the calculated processes as the number of cases increases. Event log mining [4] has the advantage of providing fine-grained data to the mining step, in comparison to [13], where tasks from workflow logs are used as a basis. Event logs incorporate data about the executions of both standard and ad-hoc processes, and hence event log mining considers both types of process when calculating the process model. On the other hand, there is no differentiation between standard and ad-hoc processes, and hence a change in or a deviation from standard processes cannot be detected, which is the same problem as in the top-down approach. Since remodeling, i.e. a recalculation of the process model, is done automatically, a new process model can be generated easily.
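To make the bottom-up idea concrete, the following minimal sketch derives a directly-follows relation from logged task traces, the raw material from which process mining algorithms construct a process model. This is an illustration of the general idea only, not the algorithm of [13] or the DYONIPOS implementation; the log format and task names are assumptions.

```python
from collections import Counter

def mine_directly_follows(traces):
    """Count how often task b directly follows task a across all traces.

    `traces` is a list of task-name sequences, one sequence per recorded
    process instance (case).
    """
    follows = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            follows[(a, b)] += 1
    return follows

# Example: three process instances, one deviating from the standard order.
traces = [
    ["receive order", "check stock", "ship goods", "send invoice"],
    ["receive order", "check stock", "ship goods", "send invoice"],
    ["receive order", "ship goods", "check stock", "send invoice"],  # ad-hoc deviation
]
for (a, b), n in mine_directly_follows(traces).most_common():
    print(f"{a} -> {b}: {n}")
```

The resulting frequency-annotated relation already exposes deviations (the low-count edges) alongside the dominant standard flow, which is exactly the information the top-down approach lacks.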


Table 1. A comparison of the top-down and bottom-up approaches to modeling business processes

Information gathering
  Top-down:  interviews, document sighting, organization visits, WFMSs, old process descriptions.
  Bottom-up: WFMS logs, event and task logs.

Large amount of information
  Top-down:  manual extraction of processes.
  Bottom-up: algorithm-based process creation based on logs.

Defining standard processes
  Top-down:  process engineers select an appropriate tool and process description language (PDL) based on further usage requirements.
  Bottom-up: only algorithm-supported PDLs; automatic generation of a suggested process description.

Process change detection
  Top-down:  only based on manual observation; a new information gathering step is required.
  Bottom-up: no comparison of standard and ad-hoc processes, i.e. no change detection.

Ad-hoc process capturing
  Top-down:  only if directly observed by the process engineer.
  Bottom-up: completely stored in event logs.

Enhancing ad-hoc processes to standard processes
  Top-down:  no data available.
  Bottom-up: suggestion of new standard processes based on a recalculation of the process model.

Knowledge-intensive work modeling
  Top-down:  limited by the process engineer's inspections.
  Bottom-up: uses templates from event logs; limited when using WFMS logs.

Taken individually, neither the top-down nor the bottom-up approach provides an adequate solution to the challenges a process engineer faces when modeling processes. Table 1 compares the two approaches.

3 The DYONIPOS Project

The DYONIPOS (DYnamic ONtology based Integrated Process OptimiSation) project strives to ameliorate the dilemma between the organizational need for standardization and control on the one hand and the day-to-day creative freedom needed by knowledge workers on the other. The research project DYONIPOS addresses this dilemma by following a hybrid approach, i.e. a combination of the bottom-up and top-down approaches. The left part of Figure 1 shows the hybrid approach of DYONIPOS. The inductive, bottom-up approach uses monitored interactions of the process executer for further analysis. The top-down approach in DYONIPOS is represented by the consideration of standard processes, for example those originating from WFMSs.


Fig. 1. The scope of the DYONIPOS project

In the business process environment, DYONIPOS will support both the process executer and the process engineer. Process executers are provided with support to find, perform, and record ad-hoc processes within their work environment, such that ad-hoc process retrieval, application, and definition take place within the executer's current work context. Hence, based on recent interactions with the system, knowledge workers are guided through their daily work and supported with relevant resources. Process engineers will be supported in reviewing and analyzing recorded ad-hoc processes. Standard processes can be validated by analyzing process instance frequencies and compared to newly created ad-hoc processes. If an ad-hoc process outperforms a standard process, DYONIPOS provides the means to promote the ad-hoc process to a standard process.

4 Semantic Pyramid

In this section we introduce the semantic pyramid, which describes the continuous evolution of contextual information through different semantic layers. The pyramid is illustrated in Figure 2, starting at the bottom with events, which are executed by one knowledge worker, and ending with processes, in which many knowledge workers can be involved. Each level of granularity (events, event blocks, tasks, and processes) provides a different representation of the data in terms of semantic quality: moving up through the layers towards the process level, the semantic quality is continually enhanced. In the following, we describe how information is gathered at each individual layer. The steps required to collect information about the process executer's actions, and thus obtain the context, start with the recording of all events, i.e. the entire user interaction. Events belonging to a logical unit are grouped together into event blocks. Event blocks form semantic sets and are eventually assigned to the knowledge worker's tasks. Hence, a task is represented as a sequence of event blocks and is modeled as a large graph containing event blocks as nodes.
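One way to picture the four layers is as a simple containment hierarchy, sketched below. This is our illustration only; the class and field names are assumptions and do not reflect the actual DYONIPOS data schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Event:
    """A single low-level user interaction, e.g. a mouse click."""
    timestamp: datetime
    kind: str          # e.g. "mouse_click", "open_file"
    resource: str      # e.g. a file path or window title

@dataclass
class EventBlock:
    """A logical unit of events, e.g. 'starting a program'."""
    label: str
    events: List[Event] = field(default_factory=list)

@dataclass
class Task:
    """A semantic set of event blocks performed by one knowledge worker."""
    topic: str
    blocks: List[EventBlock] = field(default_factory=list)

@dataclass
class Process:
    """Tasks of several knowledge workers, aggregated by process mining."""
    name: str
    tasks: List[Task] = field(default_factory=list)
```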


Fig. 2. The semantic pyramid comprises four layers: events, event blocks, tasks and processes. Starting from the bottom, each level of the semantic pyramid is obtained by sensibly grouping together elements from the preceding layer. Moving upwards through the layers, the semantic quality is continually enhanced.

4.1 Events

The data on which DYONIPOS operates consists of the monitored interactions between the user and the computer. Key-logging software, referred to as the event logger, records all events which occur on the user's computer. Events are user inputs, such as mouse movements, mouse clicks, starting a program, creating a folder, or opening a file. A similar approach is described by Fenstermacher [4]. All recorded events are stored in the event log. To ensure security and privacy, the user has the ability to modify the event log and to delete events. Our work in this area builds upon the results of the MISTRAL project [11], which aims to extract semantic concepts from text, audio, and video data. It is even conceivable to incorporate conversations amongst knowledge workers into the DYONIPOS project in order to enrich the individual user profile. The TaskTracer project [2] follows a similar approach, where telephone conversations are recorded and further processed by means of speech-to-text applications.

4.2 Event Blocks

The typical knowledge worker produces a considerable amount of data in the course of a typical day's work. Since the event logger monitors fairly low-level events, a huge amount of data is recorded. To cope with this amount of data, three strategies are used: filtering, relation analysis, and aggregation. Filtering involves removing irrelevant data from the event log. Relation analysis deals with finding dependencies and similarities. Aggregation involves grouping sequences of events in the event log into event blocks. Event blocks are formed using predefined static rules, which map a set of events to an event block. For example: the user moves the mouse over a program icon, double-clicks the icon, and the system starts a program; this set of events can be combined into an event block called starting a program (see the sketch below). An interesting open question here is to what extent it is possible to automatically find such mappings based on the data in the event log, i.e. to automatically generate mapping rules. Since not all events, and consequently not all event blocks, of a knowledge worker's daily work can be captured automatically, the user has the ability to manually add event blocks. Event blocks of this kind might be a meeting appointment, a conversation with a colleague, or the signing of a report.

The knowledge worker's privacy is protected by law. Thus, a natural dilemma arises when trying to gather as much information as possible about a worker's interactions with the system while staying within the law. No user interaction remains hidden from the system, so any data to be stored needs explicit permission from the user. Moreover, event blocks are transferred into an abstract form containing only the essential data in encrypted form. The level of encryption is tunable and could involve term vector representation or hash coding.
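To illustrate what such a predefined static rule could look like, the following sketch matches known event sequences in a log and folds them into event blocks. The rule format and event names are assumptions for illustration, not the DYONIPOS rule engine.

```python
# A mapping rule: a sequence of event kinds and the block label it yields.
RULES = [
    (("mouse_move", "double_click", "program_start"), "starting a program"),
    (("open_file", "edit_file", "save_file"), "editing a document"),
]

def aggregate(event_kinds):
    """Greedily replace known event sequences with event block labels.

    `event_kinds` is a list of event kind strings from the (filtered)
    event log; events matched by no rule are passed through unchanged.
    """
    blocks, i = [], 0
    while i < len(event_kinds):
        for pattern, label in RULES:
            if tuple(event_kinds[i:i + len(pattern)]) == pattern:
                blocks.append(label)
                i += len(pattern)
                break
        else:  # no rule matched at position i
            blocks.append(event_kinds[i])
            i += 1
    return blocks

print(aggregate(["mouse_move", "double_click", "program_start", "open_file"]))
# -> ['starting a program', 'open_file']
```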

4.3 Tasks

Event blocks are combined into tasks by grouping similar event blocks into semantic sets; one resulting set represents one task of the process executer. However, the knowledge worker is not bound to remain within a given task from start to finish. Switching between tasks might be necessary or more efficient. In other words, event blocks may be interleaved and have to be grouped together according to their topic. By analyzing the event blocks and which documents were written or read, event blocks exhibiting similar content are identified. The degree of similarity indicates the affiliation to a certain event block set. Standard text mining algorithms provide the means to extract keywords and compare textual contents. A high-quality semantic description of the process executer's tasks is thus obtained.

In DYONIPOS we focus on the knowledge worker's (user's) context. The user context describes who the user is (organizational context), what the user does (work context), how the user does it (behavioral context), with whom the user collaborates (social context), and which resources the user uses (document context). Further contexts addressed in DYONIPOS are the process context, describing the position of the knowledge worker within a business process, and the environmental context, capturing the location of the knowledge worker (for example, computer desktop, meeting room, or corridor). All these different contexts are used to provide highly supportive information for the knowledge worker. A further application of the contextual information is to identify different and similar tasks. The idea here is to analyze the user's context for context switches, which may indicate switching from one task to another. Context switches could potentially be used as indicators that supportive information should be updated. Since contextual retrieval is application-specific [5], further research has to be done in applying contextual retrieval in the area of knowledge-intensive business environments.
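As an illustration of grouping event blocks into tasks by content similarity, the sketch below uses plain term-frequency vectors and cosine similarity. The similarity threshold and the greedy assignment strategy are illustrative choices, not the DYONIPOS algorithm.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity of two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def group_into_tasks(block_texts, threshold=0.3):
    """Assign each event block to the first task whose initial block is
    sufficiently similar; otherwise open a new task."""
    tasks = []  # each task: list of (text, term-vector) pairs
    for text in block_texts:
        vec = Counter(text.lower().split())
        for task in tasks:
            if cosine(vec, task[0][1]) >= threshold:
                task.append((text, vec))
                break
        else:
            tasks.append([(text, vec)])
    return [[text for text, _ in task] for task in tasks]

blocks = ["draft project budget spreadsheet",
          "review budget spreadsheet figures",
          "write meeting minutes for steering committee"]
print(group_into_tasks(blocks))
# -> the two budget blocks form one task, the minutes block another
```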

4.4 Processes

A process is formed by aggregating, using process mining, the tasks performed by a number of different knowledge workers. Instead of only one person, several persons are involved, each performing well-defined steps within the process. The different contexts described in Section 4.3 are merged, allowing crucial insights into the semantics of business processes. Based on the derived information, DYONIPOS provides a number of services to support the knowledge worker.

5 Process Engineer Support

In this section the services DYONIPOS provides for the process engineer are introduced. DYONIPOS visualizes both ad-hoc and standard processes and enables the process engineer to review and analyze recognized ad-hoc processes and compare them to pre-defined standard processes. Process visualization serves as a crucial preprocessing step for further analysis and optimization. The provided services are listed in the corresponding categories below.

5.1 Visualization

Both ad-hoc and standard processes are modeled in DYONIPOS as RDF graphs. These graphs can be visualized using standard techniques from the field of graph drawing [1], such as layered Sugiyama-style graph drawing [10] and force-directed placement [3]. For DYONIPOS, the most recent versions of these algorithms will be used [6]. Processes and subprocesses can be displayed at different levels of granularity. Hence, the knowledge worker is not required to be an expert on the whole process: it is possible to select only that part of the process within the worker's scope of knowledge. Color coding can be used to distinguish between old and new, and between standard and ad-hoc processes. The frequency of paths traversed by process executers is denoted by the thickness of the lines connecting parts of processes. Well-beaten paths can be easily identified, simplifying the task of selecting processes for further analysis. A further module will allow visual comparison of two graphs, for example an ad-hoc process of interest and a similar standard process.

– Multiple Levels of Granularity. Entire processes and parts of processes, i.e. subprocesses, can be displayed individually, allowing views at different levels of granularity.
– Frequent Process Paths. Many newly created processes occur only with low frequency and can be disregarded. However, the process engineer is interested in ad-hoc processes that happen more frequently, since these represent a potential alternative to standard processes. Exactly these processes can be visualized, and their well-beaten paths identified, simplifying the task of selecting processes for further analysis (see the sketch after this list).
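To illustrate frequency-based edge thickness, the following sketch emits a Graphviz DOT description of a process graph in which line width (penwidth) encodes traversal frequency. DYONIPOS itself operates on RDF graphs with the layout algorithms cited above; this simplified rendering and its edge counts are assumptions for illustration.

```python
def to_dot(edge_counts, max_penwidth=6.0):
    """Render a process graph as Graphviz DOT, scaling edge thickness by
    traversal frequency so that well-beaten paths stand out visually."""
    top = max(edge_counts.values())
    lines = ["digraph process {", "  rankdir=LR;"]
    for (a, b), n in sorted(edge_counts.items()):
        width = 1.0 + (max_penwidth - 1.0) * n / top
        lines.append(f'  "{a}" -> "{b}" [label="{n}", penwidth={width:.1f}];')
    lines.append("}")
    return "\n".join(lines)

edges = {("receive order", "check stock"): 40,
         ("check stock", "ship goods"): 37,
         ("receive order", "ship goods"): 3}   # infrequent ad-hoc shortcut
print(to_dot(edges))   # pipe the output through `dot -Tpng` to render it
```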

5.2 Analysis

The analysis of subprocesses and entire processes is supported by the corresponding visualization. The process view can be adapted according to the frequency with which ad-hoc processes occur. Low-frequency processes are usually not taken into account by the process engineer and are faded out. The ad-hoc processes in question can thus be easily detected and compared, partially or entirely, to traditional processes. DYONIPOS enables the process engineer to simulate standard and ad-hoc processes by means of Petri nets. Knowledge flows can be traced, critical paths can be identified, and time delays are indicated. It is possible to identify inefficient procedures, such as processes of varying duration which perform the same task. Bottlenecks can be detected by looking for parallel sequences interrupted by sequential steps.

– Similarity Measuring. With this service, two processes can be compared to each other. Comparison at different levels of granularity, i.e. which subprocesses or tasks the processes have in common, allows insights into the corresponding levels of the organizational structure (see the sketch after this list).
– Standard Process Validation. Standard processes are validated by comparing them to new ad-hoc processes. The degree of accordance of frequent process paths can serve as an indicator of the acceptance of the standard process amongst employees. Standard processes that have few paths in common with observed ad-hoc processes are predestined for further analysis.
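One simple way such a similarity measure could be realized is as the Jaccard coefficient over the edge sets of two process graphs, sketched below. This metric is our illustrative choice, not necessarily the measure used in DYONIPOS.

```python
def jaccard_edge_similarity(process_a, process_b):
    """Fraction of task-to-task transitions two process graphs share.

    Each process is given as a set of (from_task, to_task) edges; a value
    near 0 flags a standard process with few paths in common with the
    observed ad-hoc behaviour, marking it for further analysis.
    """
    union = process_a | process_b
    return len(process_a & process_b) / len(union) if union else 1.0

standard = {("receive order", "check stock"), ("check stock", "ship goods")}
ad_hoc   = {("receive order", "ship goods"), ("check stock", "ship goods")}
print(jaccard_edge_similarity(standard, ad_hoc))  # -> 0.333...
```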

5.3 Optimization

Once inefficient procedures have been detected, the process engineer can take steps to improve them. The most efficient process can be used as a model for a new standard process. Bottlenecks can be defused by replacing sequential steps with parallel ones.

– Process Deviation Analysis. Changes in standard process flows can be detected. These deviations can be analyzed with regard to occurrence frequency, time efficiency, and the order of tasks. This analysis information can be used to defuse bottlenecks by replacing sequential steps in a process with parallel ones, and to increase time efficiency by comparing various deviations. Hence, the corresponding standard process can be enhanced.
– Suggesting Processes for Standardization. Ad-hoc processes can be analyzed and compared to standard processes by making use of the above-mentioned services. A newly created process is suggested for standardization if it accurately defines an organizational procedure so far neglected, or if it outperforms an existing traditional process (see the sketch after this list).
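As a sketch of how a standardization suggestion could be derived from logs, the following groups executions into variants and flags frequent variants that are, on average, faster than the standard one. The log format and thresholds are assumptions for illustration, not the DYONIPOS implementation.

```python
from collections import defaultdict

def suggest_for_standardization(executions, standard, min_count=5):
    """Group process executions by task-sequence variant and suggest a
    deviation for standardization if it occurs frequently and is, on
    average, faster than the standard variant.

    `executions` is a list of (task_sequence_tuple, duration) pairs.
    """
    stats = defaultdict(list)
    for variant, duration in executions:
        stats[variant].append(duration)
    std_avg = sum(stats[standard]) / len(stats[standard])
    suggestions = []
    for variant, durations in stats.items():
        avg = sum(durations) / len(durations)
        if variant != standard and len(durations) >= min_count and avg < std_avg:
            suggestions.append((variant, len(durations), avg))
    return suggestions

standard = ("receive order", "check stock", "ship goods", "send invoice")
log = [(standard, 10.0)] * 20 + \
      [(("receive order", "ship goods", "send invoice"), 7.0)] * 6
print(suggest_for_standardization(log, standard))
# -> the 6-fold, faster shortcut variant is proposed for standardization
```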

6 Concluding Remarks

In this paper we presented the process engineer support environment of the research project DYONIPOS. In this context we discussed the DYONIPOS approach to the challenges a process engineer faces and the services provided to support them. We are currently exploring event logging software, graph mining techniques, and semantic technologies, and we plan to investigate task and process mining techniques.

Acknowledgements

The project results have been developed in the DYONIPOS project (DYnamic ONtology based Integrated Process OptimiSation). DYONIPOS is financed by the Austrian Research Promotion Agency (http://www.ffg.at) within the strategic objective FIT-IT under project contract number 810804/9338. The Know-Center is funded by the Austrian Competence Center program Kplus under the auspices of the Austrian Ministry of Transport, Innovation and Technology (http://www.ffg.at), by the State of Styria, and by the City of Graz.

References

1. Giuseppe di Battista, Peter Eades, Roberto Tamassia, and Ioannis G. Tollis. Graph Drawing: Algorithms for the Visualization of Graphs. Prentice Hall, New Jersey, 1999.
2. Anton N. Dragunov, Thomas G. Dietterich, Kevin Johnsrude, Matthew McLaughlin, Lida Li, and Jonathan L. Herlocker. TaskTracer: a desktop environment to support multi-tasking knowledge workers. In IUI '05: Proceedings of the 10th International Conference on Intelligent User Interfaces, pages 75–82, New York, NY, USA, 2005. ACM Press.
3. Peter Eades. A heuristic for graph drawing. Congressus Numerantium, 42:149–160, 1984.
4. Kurt D. Fenstermacher. Revealed Processes in Knowledge Management. In Professional Knowledge Management: Experiences and Visions, Contributions to the 3rd Conference Professional Knowledge Management, April 10–13, 2005, Kaiserslautern, Germany, pages 443–454, 2005.
5. Norbert Fuhr. Information Retrieval: From Information Access to Contextual Retrieval. In M. Eibl, Ch. Wolff, and Ch. Womser-Hacker, editors, Designing Information Systems. Festschrift für Jürgen Krause, pages 47–57, 2005.
6. Fabien Jourdan and Guy Melançon. Multiscale hybrid MDS. In Proceedings of the Eighth International Conference on Information Visualisation (IV'04), pages 388–393, London, UK, July 2004. IEEE.
7. U. Riss, A. Rickayzen, H. Maus, and W. van der Aalst. Challenges for Business Process and Task Management. Journal of Universal Knowledge Management, 0(2):77–100, 2005.
8. Uwe Riss. Knowledge, Action, and Context: A Process View on Knowledge Management. In Professional Knowledge Management: Experiences and Visions, Contributions to the 3rd Conference Professional Knowledge Management, 2005, Kaiserslautern, Germany, pages 555–558, 2005.
9. S. Schwarz, A. Abecker, H. Maus, and M. Sintek. Anforderungen an die Workflow-Unterstützung für wissensintensive Geschäftsprozesse. In WM 2001, Baden-Baden, Germany, 2001.
10. Kozo Sugiyama, Shojiro Tagawa, and Mitsuhiko Toda. Methods for visual understanding of hierarchical system structures. IEEE Transactions on Systems, Man, and Cybernetics, 11(2):109–125, February 1981.
11. K. Tochtermann, M. Granitzer, V. Sabol, and W. Klieber. MISTRAL: Service-orientierte Cross-Media Techniken zur Extraktion von Semantik aus Multimedia Daten und deren Anwendung. In S. Reich, G. Güntner, T. Pellegrini, and A. Wahler, editors, Proceedings Semantics 2005. Trauner Verlag, 2005.
12. K. Tochtermann, D. Reisinger, M. Granitzer, and S. Lindstaedt. Integrating Ad Hoc Processes and Standard Processes in Public Administrations. In Proceedings of the OCG eGovernment Conference, Linz, Austria, 2006.
13. W. van der Aalst, A. Weijters, and L. Maruster. Workflow Mining: Discovering Process Models from Event Logs, 2003.
14. W. van der Aalst and A. J. M. M. Weijters. Process mining: a research agenda. Computers in Industry, 53(3):231–244, 2004.
15. W. van der Aalst and Mathias Weske. Case handling: a new paradigm for business process support. Data & Knowledge Engineering, 53(2):129–162, 2005.
16. L. van Elst, F. R. Aschoff, A. Bernardi, H. Maus, and S. Schwarz. Weakly-structured workflows for knowledge-intensive tasks: An experimental evaluation.
17. W. van der Aalst, G. De Michelis, and C. A. Ellis. Workflow management: Net-based concepts, models, techniques, and tools. Computing Science Report 98/7, Eindhoven University of Technology, Eindhoven, The Netherlands, 1998.
18. Lijie Wen, Jianmin Wang, Zhe Wang, and Jiaguang Sun. A Novel Approach for Process Mining Based on Event Types, 2004.