Workflow Support for Failure Management in Federated Organizations

Ralf Klamma, Peter Peters, Matthias Jarke
Informatik V (Information Systems), RWTH Aachen
email: {klamma|peters|jarke}@informatik.rwth-aachen.de

Abstract

Failure management for complex products (such as production machinery) exemplifies a class of reactive workflow management tasks for which current WFMS offer few solutions. This class is characterized by great variation of tasks and cooperation patterns, the need to bring rich context knowledge to many different kinds of workplaces, and the need to interlink effective workflow execution with continuous organizational learning. In the German FOQUS project, we have prototyped and evaluated a WFMS architecture which addresses these problems through (a) the encapsulation of problem context in electronic circulation folders; (b) a semantic trading mechanism managing the flow of such folders in a highly flexible manner; (c) a concept for process-integrated workplaces in which the necessary information is made effectively available in the usual working environment of the different kinds of users; and (d) metamodeling techniques to support change in this environment.
1. Introduction

Workflow models and workflow management systems (WFMS) are determined by the kinds of tasks they are intended to support. Early WFMS, e.g. FlowMark by IBM, CSE/WorkFlow® by CSE, and WorkParty by SNI, pursue the activity-oriented aspect of workflow management [8, 29, 18] and are best suited to support long, standardized task chains. They have been criticized because they tend to neglect the flexible interaction between agents exchanging complex structured (possibly multimedia) information. Two alternative views have emerged. Document-oriented WFMS, e.g. DM/Workflow by Intergraph or OPEN/workflow by Wang Laboratories, Inc., concentrate on the management of complex structured information in a predefined homogeneous setting for special purposes. Even apart from special document processing tasks, this view reflects the situation in most companies: information is produced and stored, but may never reach interested persons, because there are no defined information flows. Service-oriented workflows, such as the Coordinator by Action Technologies Inc., emphasize the need for closing
the contractual loop between customers and providers of services [15, 26]. Their main advantage is an explicit modeling of the coordination relationships between the agents involved. Their main weakness is that they focus on coordination only and do not directly address the knowledge context created and exchanged during workflows. Like activity-oriented workflows, but in contrast to document-oriented ones, they focus on short-term aspects rather than on the long-term creation and management of organizational knowledge. In this paper, we investigate an extension to service-oriented workflows which can be briefly characterized as highly flexible reactive workflow systems with embedded organizational learning support. The application domain we studied for such workflows is failure management for complex technical products, such as those used in production machinery. In section 2, we characterize this domain and its workflow management requirements based on experiences in an interdisciplinary project with engineering design researchers and industry, the FOQUS project. We then present an architecture that has been developed and evaluated within the project (section 3). The main ingredients of this architecture are: (a) the use of electronic folders as a movable encapsulation mechanism for task-related context knowledge; (b) a "semantic trading" mechanism which serves both as a workflow engine and as a broker that helps acquire additional context and contact information where necessary; and (c) a "bridge" component which enables the integration of heterogeneous workplaces into the workflow process defined in the trader. All of these components need an infrastructure of sharable and extensible background knowledge in order to function; this infrastructure is encoded in the meta models described in section 4.
Section 5 completes the paper with some observations from the experimental application of this technology in manufacturing organizations and outlines future work.
2. Failure Management as a Business Process

In this section, we provide a characterization of failure management (FM) and show why traditional WFMS fail in this setting. We define some requirements for more
successful WFMS and introduce the so-called "principle of escalation" that we later use as a guideline for our design.
2.1 The importance of failure management

Global markets with informed customers create new competitive pressure. Manufacturers face more and more competitors whose products have comparable characteristics. In particular, small and medium-sized organizations with a narrow product range but high market share can only evade into new markets if they continuously improve quality and reduce time-to-market as well as costs [28, 22]. An effective way to reach both goals is to reduce faults or failures throughout the product life cycle. According to [5], 90% of the customers unsatisfied with product quality will avoid that product in the future, and every failure beyond the acceptable limit set by the market leader leads to sales reductions of between 3 and 4%. Despite the TQM movement, the potential of this insight is still significant: surveys performed in the FOQUS project [7, 24] have shown that 60% of the errors made in production lines are repeat errors, causing a loss of 10% of personnel and machine usage capacity. Fault-free production is an ideal often not achievable in time- and cost-constrained production lines. Thus, not only product quality but also service quality is important for customer satisfaction. Reaction time is the most significant factor; our surveys showed that the typical complaint processing duration is nine to ten days (for 80% of the responding manufacturers) and that on average three departments (up to ten) are affected by complaint processing. Only the combination of both requirements, i.e. the reduction of repeat faults in production and the increase of service quality, will satisfy customers, thus strengthening the ties between customers and enterprises. Both requirements demand the design of reactive and preventive management processes based on a quality loop within the companies [22], improving the capture and exchange of knowledge about failures. Failure management (FM) is the modeling, enacting and maintaining of reactive and preventive processes.
It must be investigated in a holistic way, considering organizational, personnel and technical aspects. An example illustrates the complexity of the task. When a customer complains to the manufacturer's call center that his newly bought gear unit loses oil, and the call center cannot give direct advice, a service engineer drives to the customer. He detects the fault (sealing rings are not assembled correctly) and must take measures to correct it (e.g. exchange of the gear unit). To prevent further failure occurrences, failure handling is passed on, based on his assumption about the failure cause, to the departments responsible for the correction. Every department involved needs to look for causes and to take corrective measures whose success is reported back to the process owners. After several assumptions and trials the cause is finally unveiled: the failure was caused in product assembly planning, propagated to product assembly, and not detected by quality control because test routines were not applied effectively. Two types of organizational learning can take place: the call center can be informed how to provide a quick fix, and product assembly can avoid the problem in future product versions.
2.2 WFMS requirements for failure management

From the information technology point of view, the example raises several challenges. Since the product life cycle is distributed in time and space, faults detected in a late phase of the product life cycle are not necessarily produced there. Therefore, the process of failure management potentially involves all departments taking part in the product life cycle; tasks must nevertheless be escalated in a targeted, problem-driven manner, otherwise information overload will be the result. Inter- and intra-organizational barriers have to be bridged by workflows and information flows:
1. Informational barriers: For example, the information available to the assembly planning agent is not available to the service engineer at the customer site.
2. Conceptual (or knowledge) barriers: For example, the knowledge about assembly processes for gear units is not available to the CAD engineer.
3. Technological barriers: For example, the service engineer has no access to the enterprise networks, so no direct exchange of electronically stored data is possible.
WFMS seem to provide a promising basis for tackling the problems that result from these barriers. However, most systems have been developed and implemented based on the following assumptions:
1. The workflows established are quite stable and easy to maintain. WFMS work best if well-documented business processes are established and can be supported electronically.
2. A detailed and consistent business process model exists which is understood by all people involved in the project. Some WFMS use reference models for special application domains which can be customized to individual enterprise needs.
3. The agents using the systems have comparable skills. Because of the overall business process model, WFMS support is given to all agents by the same means.
4. WFMS work together with existing applications and solutions in the application area.
These assumptions cause several problems in a distributed and flexible area like FM:
1. FM processes are not stable but change frequently. Experience with WFMS shows that they are difficult to adapt to process changes, thus forcing "cow paths" [9] to be established. Massive reengineering of business models is still an open problem [2].
2. On a detailed workflow level, FM processes are not well-defined, and process ownership belongs to the departments involved in the process. Therefore, monolithic and complex business process models which must be understood by all agents do not make sense for FM. WFMS using them do not reflect the federated organization context of distributed or virtual enterprises [1].
3. Failure processes are often handled in an ad-hoc manner and spread over several departments, where agents with different skills do their day-to-day work using different work environments. Therefore, task- and person-specific support by a WFMS is needed.
4. In a heterogeneous system world, there is no integration at all because of the variety of systems. WFMS used here rely on simple concepts like task lists, email or PERT charts.
Many studies about implementing workflow concepts in enterprises address problems similar to those stated above [16, 3]. These problems raise certain requirements to be fulfilled by WFMS support for flexible, distributed FM processes:
1. Flexible mechanisms for modeling, enacting and maintaining workflows and information flows are needed. New variants of products frequently cause new types of failures. Thus, new or changed workflows and information flows emerge in which existing knowledge has to be applied.
2. Workflow management support is based not only on organizational needs but also on the interaction and communication needs of the involved agents. The simultaneous needs for exchanging data and bridging existing culture, language, and system barriers create uncertainty and equivocality [4].
Thus, the possibility to interact with each other to coordinate work and exchange relevant information must be provided by a FM system. 3. For users, tool support must be given on the right level of usage. Therefore, WFMS have to embed and enrich existing workplaces. This means the flexible integration of existing tools with the WFMS by providing several levels of information and service exchange support. Furthermore, collaboration and information exchange tools, like ad hoc querying, multimedia, and groupware tools, must be made available [20, 24].
4. The management and exchange of complex structured information produced by legacy applications, e.g. CAD drawings, NC programs, catalogs stored in heterogeneous data sources, must be facilitated.
2.3 The principle of escalation

Agents processing failures follow a typical schema. After detecting a failure, they try to find its cause. If they find one, and if it is possible to correct the failure ad hoc, they will do so. If agents are not able to find a cause, or if a local problem solution is not possible, they try to report the case to another agent who is skilled enough and responsible. But reporting a failure may be costly or displeasing, or there may be no knowledge about responsibilities. If a failure must be processed in different departments in order to analyze the problem and define a measure, two problems occur: either there is no allocation of responsibility, causing delays in processing and reducing the effects of failure elimination, or multiple defined responsibilities cause mutual obstruction in processing. In order to reduce these knowledge and responsibility problems, FOQUS proposed the escalation principle for interdepartmental processes, with the aim to
• enable direct processing of failures,
• systematically detect spheres of competence, and
• give structured support in processing the failure.
Escalation describes a mechanism enabling the processing of failures and the exchange of knowledge between spheres of competence. The elements of the escalation principle are shown in Figure 1. Spheres of competence describe workplaces or departments in the distributed product life cycle, e.g. the hot-line workplace, the service department, and the assembly planning department. These workplaces or departments perform processes and contribute to the whole FM process. Micro processes describe actions to be performed in one sphere of competence. There are three steps, structurally equal for every sphere but different in implementation depending on the tasks and skills of the agents:
1. Capturing the failure: the process of gathering and documenting available data.
2. Analyzing the failure: the process of interpreting available information, i.e. determining possible causes and defining applicable measures. If no final solution is applicable or the agent presumes far-reaching consequences, the failure can be escalated.
3. Correcting the failure: the process of defining, performing and tracing measures for failure treatment. The success of measures is communicated to the agents involved in the escalation.

Figure 1: Escalation principle (spheres of competence, each with capture/analyze/correct micro processes, connected by escalate-failure steps starting from failure detection)
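The capture/analyze/correct schema of a micro process can be sketched as a small state machine. This is an illustrative sketch only; the class and method names (`Sphere`, `handle`, `Failure`) are our own and not part of the FOQUS implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Failure:
    description: str
    cause: Optional[str] = None                  # filled in by the analyze step
    corrected: bool = False
    trail: list = field(default_factory=list)    # spheres that handled the case

class Sphere:
    """One sphere of competence running the capture/analyze/correct schema."""
    def __init__(self, name, known_causes, can_fix):
        self.name = name
        self.known_causes = known_causes  # symptom -> cause this sphere can diagnose
        self.can_fix = can_fix            # set of causes correctable locally

    def handle(self, failure):
        failure.trail.append(self.name)                             # capture
        failure.cause = self.known_causes.get(failure.description)  # analyze
        if failure.cause is not None and failure.cause in self.can_fix:
            failure.corrected = True                                # correct
            return "closed"
        return "escalate"   # no cause found or no local fix: hand the case on

service = Sphere("service", {"gear unit loses oil": "faulty sealing rings"}, set())
case = Failure("gear unit loses oil")
outcome = service.handle(case)   # service diagnoses but cannot fix locally
```

In the gear-unit example, the service sphere finds a cause but has no local fix, so the case escalates; a sphere whose `can_fix` set contains the cause would close it instead.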
Macro processes describe a directed graph of escalation steps. With every escalation step, the failure is passed to one or more spheres of competence. The conditions for escalation are negotiated between the agents in the spheres. With these elements it is possible to describe the micro processes of spheres and to assemble them in a flexible manner. Differences in culture, language and systems can be encapsulated in the spheres, and negotiation between agents is driven by the escalation principle. Negotiation across spheres can be founded on well-defined criteria described in contracts, as in the language-action approach [26].
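A macro process, read as a directed graph of escalation steps with negotiated conditions, can be sketched as a graph walk. The graph structure and the lambda conditions are illustrative assumptions, not the FOQUS representation.

```python
def escalate(graph, start, failure):
    """Breadth-first walk over an escalation graph: the failure is passed to
    every sphere whose negotiated escalation condition accepts it."""
    visited, frontier = [], [start]
    while frontier:
        sphere = frontier.pop(0)
        if sphere in visited:
            continue
        visited.append(sphere)
        for condition, target in graph.get(sphere, []):
            if condition(failure):        # negotiated escalation condition
                frontier.append(target)
    return visited

# hypothetical macro process for the gear-unit example
graph = {
    "call center": [(lambda f: not f["quick fix"], "service")],
    "service": [(lambda f: f["suspect"] == "assembly", "assembly planning")],
}
route = escalate(graph, "call center", {"quick fix": False, "suspect": "assembly"})
# route lists the spheres of competence the failure was passed to, in order
```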
On the conceptual level, internal integration means documented models with open interfaces and uniform failure data models. Data models gained from reverse engineering for external integration have been described by interface definitions for special purposes, e.g. the automatic gathering of CAQ data from multidimensional coordinate measuring instruments, or by means of a comprehensive product, process and machine model [23]. The context of a failure is helpful for both analyzing and classifying the failure; every failure context can be described by a triple (product, process, machine). The coupling with the failure data model has been achieved by means of a data dictionary. Finally, processes and information have been integrated in electronic circulation folders. On the technical level, internal integration means the integration of tools at one workplace (sphere of competence) and the integration of workplaces in a distributed, interdepartmental environment. The core idea of external workplace integration is the introduction of a central instance, the Trader system, consisting of a repository with defined models [12] and three components for folder, workflow, and query management in a heterogeneous system world. Bridges make the traded information available to the different kinds of individual workplaces in a customized manner.
3. The FOQUS WFMS
The escalation principle suggests two goals that need to be accomplished by a suitable architecture for failure management:
1. Integration within the FM system (internal integration): Methods, tools and knowledge have to be integrated in micro and macro processes. The strategy here is to integrate top-down, based on cooperative modeling of processes and data. Tools have to be integrated in spheres of competence, and spheres of competence have to be integrated to realize distributed FM processes.
2. Integration into the heterogeneous organizational environment (external integration): Existing manufacturing and service processes, information sources, and legacy applications have to be considered. The strategy here is to reengineer bottom-up, extracting models of processes, information sources and their contents from existing data and processes.

3.1 Electronic circulation folders

To integrate workflows and information flows, we employ the electronic circulation folder metaphor to carry context around with a workflow. The approach is best described as an object migration view on workflow management [16]. The core idea is that the document includes the work to do. Informational, conceptual, and technical barriers are bridged by the management of folders and their linkage to the Trader system. All the information needed for processing a failure is stored in folders or linked to them. Folders capture all the information gathered in the process, thus creating temporary knowledge bases available to all agents processing failures. Folders are further given direct access to information sources in the enterprise and to tools specific to the workplaces. Also, negotiation protocols for defined workflows as well as mechanisms for ad hoc escalation are linked to folders.
A folder is created every time a failure is detected. All information related to the case is collected in the folder. If a local correction is not possible, the folder knows from defined models to which agents it needs to be escalated. Because folders can be escalated to different agents at the same time, each escalation creates new versions; version management is simplified by the fact that information cannot be deleted from folders. Formally and on the user interface, a folder is an object consisting of six segments which provide access to workflows, failure data, documents, system information, and version data. An overview of the segments is given in Table 1.

Table 1: Folder segments

Segment               Purpose                                   Support      Data      Persistence
Project management    Folder life cycle control                 macro/micro  original  persistent
Failure data          Data core for failures                    micro        original  persistent
Workplace documents   Documents specific to the workplace       micro        link      temporal
Agents                Linkage between roles and persons         macro/micro  original  temporal
Tools                 Linkage for available tools               micro        original  temporal
Versions & history    History of escalation and split control   macro/micro  link      persistent

An example of a segment is given in Figure 2: a multimedia workplace document related to the description of a failure (all readable information is in German).

The segment project management contains persistent folder data for controlling macro processes as well as a portfolio for local task management at the workplace. The data are updated each time the folder escalates; they describe folder routing, schedules, cost estimation plans, the time the folder spent at workplaces, and so on.

The segment failure data primarily consists of persistent folder data containing the descriptions, context, analysis and classification of errors, but also technological data concerning the storage and representation of failure data in external data sources.

The segment workplace documents links to workplace-specific documents. The documentation contains textual and multimedia material for describing, analyzing, and correcting the failure (e.g. CAD drawings, machine photographs, product parts, sounds). Furthermore, catalogues for federated queries are linked to this segment to give users (specific to their roles) the possibility to query connected data sources about the failure [12].

The segment agents contains temporal assignments of persons to roles in the FM system. The segment refers to the concept agent in the language model.

The segment tools consists of temporal assignments of applications specific to workplace and role. These tools are linked to the folder based on experience; additional tool linkage on demand is also possible by querying connected data sources. The segment refers to the concept tool in the language model.

The segment versions & history contains link chains concerning the history and version management of folders, needed for process controlling and knowledge capturing. The process history is also needed for documentation and monitoring of the performed escalations. To guarantee completeness and consistency of folders, the content of segments grows monotonically, so that the deletion of process-persistent material is not possible.
Figure 2: Segment workplace documents
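The six-segment folder with its monotonic-growth and versioning rules can be sketched as follows. The class and its API are illustrative assumptions; the FOQUS folders additionally carry links to tools, protocols, and external sources that this sketch omits.

```python
import copy

class Folder:
    """Sketch of an electronic circulation folder with the six segments of
    Table 1; segment content grows monotonically (no deletion)."""
    SEGMENTS = ("project management", "failure data", "workplace documents",
                "agents", "tools", "versions & history")

    def __init__(self):
        self.version = 1
        self._segments = {name: [] for name in self.SEGMENTS}

    def append(self, segment, entry):
        self._segments[segment].append(entry)    # append-only: no deletion

    def read(self, segment):
        return tuple(self._segments[segment])    # read-only view

    def escalate(self):
        """Escalating the folder yields a new version; the history segment
        records the split and existing content is carried along unchanged."""
        new = copy.deepcopy(self)
        new.version = self.version + 1
        new.append("versions & history", f"escalated from v{self.version}")
        return new

f = Folder()
f.append("failure data", "gear unit loses oil")
g = f.escalate()    # new version for the accepting sphere of competence
```

Because each escalation deep-copies the folder, parallel escalations to several agents naturally produce independent versions, matching the directed version graph described in the text.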
3.2 Folder management tasks

Workflow management organizes the life cycle of a folder. The primary goals are (1) to guarantee consistency and transparency in macro processes each time the folder is transferred and (2) to support the agent processing the folder at the workplace with non-local information and services. For (1), the workflow management tasks are defined as follows:
1. Folder transfer. Primary copies are at the workplaces where the failure is actually processed. Secondary copies are left at each workplace in the escalation graph until the folder is closed. These copies are updated after every change in persistent folder segments, thus giving feedback to all agents in the escalation graph. For the FM system, the process of transferring the folder consists of the following steps:
• Find the defined sphere of competence for the transfer. If no responsible agent is defined for an escalation step, there is a predefined agent, the process owner, who is responsible for the folder by default. This agent will then be contacted by the system. The process owner decides whether there will be an ad hoc escalation or whether the treatment of the failure is stopped, e.g. for economical reasons.
• Establish contact between the spheres (either directly or e.g. by email).
• Enable the negotiation process after direct contact. For every pair of spheres, individual negotiation protocols can be defined.
• Transfer the folder after successful negotiation; contact the process owner otherwise. Before the transfer, temporal links have to be substituted, and new versions of the folder are created for the spheres accepting the folder, constituting a directed graph. The folder life cycle can be traced with this graph.
• Gather the receipts of transfer from both spheres.
2. Folder trace. Every person using the FM system can monitor the escalation graphs. The degree of monitoring is defined by the temporary role of the person.
• Every person involved in the process can query the progress of escalation from the history segment (the folder graph). Feedback is given automatically to every person involved when the folder is closed.
• The process owner can monitor folders and stop escalation from the segment project management at any time.
For (2), if a user requests information that is not contained in the folder but in external data sources, the FM system mediates those data to the user. Because the FM system treats external data sources and workplaces in the same way, users can also query other workplaces, thus supporting communication in failure processing.
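The transfer steps above can be condensed into one function: find a responsible sphere, negotiate, then hand over or fall back to the process owner. Function and parameter names (`transfer_folder`, `accepts`, `owner_decision`) are our illustrative stand-ins for the negotiation protocols and process-owner logic of the FM system.

```python
def transfer_folder(folder, source, target, accepts, owner_decision):
    """One escalation step: contact the target sphere, run the pairwise
    negotiation, and fall back to the process owner when no responsible
    sphere is defined or negotiation fails."""
    if target is None:                        # no responsible agent defined
        return ("process owner", owner_decision)
    if accepts(target, folder):               # negotiation protocol succeeded
        folder["history"].append((source, target))   # extend escalation graph
        return ("transferred", (source, target))     # receipt of transfer
    return ("process owner", owner_decision)

folder = {"id": 7, "history": []}
receipt = transfer_folder(folder, "service", "assembly planning",
                          lambda sphere, f: True, "stop treatment")
```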
3.3 Folder circulation via traders

Despite the decentralized development process, the FM system currently manages the acquired information under central control in order to avoid inconsistencies and to provide appropriate version control of folders during their life cycle. These requirements led to an integration approach that uses a repository as a tool providing sharing and integration of knowledge and the ability to maintain the consistency of objects, relations and their meanings in the organizational context. The repository is responsible for the management and exchange of complex structured information.
The overall architecture of the FM system is given in Figure 3. The Trader system in the middle of the figure is connected via a network or serial lines to the heterogeneous enterprise information sources and to the workplaces. The Trader system consists of a central repository, a query manager, a workflow engine, a folder manager, and interfaces to external and internal sources (a federated query interface towards the heterogeneous information sources and a database communication interface towards the workplaces).

Figure 3: Integrated Trader architecture

The Trader repository consists of four components for the management and integration tasks:
1. In the external knowledge component, the federated schemata, functions for querying the federated databases, and triggers are stored. The folder manager uses this knowledge for the segments "project management" and "workplace documents". The query manager uses the knowledge for answering predefined or ad hoc queries built with special tools [20]. The workflow engine uses defined triggers for initiating database actions.
2. At the core of the internal knowledge component lie the uniform failure data model and the escalation models for enacting and tracing measures. The failure data model allows queries on remote failure databases. The folder manager uses the information for the segments "project management", "failure data", and "history". The query manager uses the knowledge for routing queries to workplaces. The workflow engine uses the escalation models for handing down folders.
3. The third component consists of the agent and tool models. The agent model defines spheres of competence, and the tool model contains knowledge about available tools. The folder manager uses this knowledge for the segments "agents" and "tools". The workflow engine obtains from it the information which person is linked to an agent on a certain workplace.
4. The communication knowledge component consists of negotiation protocols. Possible communication and negotiation patterns between workplaces, as well as protocols for querying external data sources, are specified by statecharts [10]. The query manager uses the protocols for automatic interaction with databases. The workflow engine uses the knowledge for enacting negotiation processes.
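The dependencies between the three Trader tools and the four repository components listed above can be summarized in a small lookup table. The table merely restates the text; its dictionary form is our own.

```python
# Which Trader tool consults which repository component, per the four
# components described above (the mapping follows the text).
REPOSITORY = {
    "external knowledge":      {"folder manager", "query manager", "workflow engine"},
    "internal knowledge":      {"folder manager", "query manager", "workflow engine"},
    "agent & tool models":     {"folder manager", "workflow engine"},
    "communication knowledge": {"query manager", "workflow engine"},
}

def components_used_by(tool):
    """Return the repository components a given Trader tool relies on."""
    return sorted(name for name, users in REPOSITORY.items() if tool in users)
```

Note that the workflow engine touches all four components, which reflects its central role in folder circulation.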
3.4 Workplace integration via bridges

A tool called Bridge is responsible for accepting and handing down folders as well as for tool integration tasks. A Bridge has a local database for storing folders and tool data, and it can read and write every segment of the folders stored there. For the Trader system, the workplaces are federated information systems, too. All Trader tools can access workplaces via defined interfaces and communication protocols. The Bridge (Figure 4) communicates with the Trader system via the database communication interface.

Figure 4: Integrated workplace (Bridge), with components for folder management, task management, tool management, and user interface/tool control, connected to the tools via tool interfaces

The three main tasks of the Bridge are reflected in the architecture:
• Local folder management: accepting and handing down folders, maintaining primary and secondary copies, and handling folder queries. If a user is logged into his workplace, all folder transfer requests are notified directly by a requester; otherwise he is notified asynchronously, e.g. by email. Every folder can be negotiated with the requesting Bridge using the negotiation protocols defined for folders and workplaces.
• Local task management: For every folder transferred to the workplace, a new task is created. Tasks can be refined locally, e.g. by additional control data in the segment "project management", negotiation results, and personal preferences.
• Local tool management: Tools built with the libraries developed for tool integration can exchange data and task information with the Bridge. Proprietary tools can be started and stopped from the Bridge.
Tool integration [27] has concentrated on the data, control and representation dimensions. Tighter integration with the process can be reached by a more sophisticated construction of process-integrated workplaces [25]. Tools for micro process support were developed by our partners or already
existed at customer sites. So only tool wrapping mechanisms (black box integration) as well as simple data, control and representation mechanisms were used:
• Representation integration was achieved by a common style guide and commercially available user interface classes.
• Data integration was achieved by the folder approach and the uniform failure data model. Common libraries for accessing the local database at the workplace have been developed.
• Control integration, defined as offering and accepting services, consists of a task exchange mechanism based on statecharts.
On the micro process level, tools have to be integrated into the heterogeneous organizational environment. Thus, they access every data resource defined in the Trader system by using the mechanisms implemented in the folder segments. For example, predefined queries [12] are linked to the segment "workplace documents". A query is sent to the query manager and transformed into SQL statements for the selected databases based on the repository. The SQL statements are then sent to the connected databases by means of protocols processed by the workflow engine. By assembling the partial answers, the query manager constructs the answer table, and the workflow engine replicates that table to the local database at the workplace. Such queries deal, for example, with budgets, schedules, deadlines, and actual workflow progress information for a folder workflow.
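The query manager's assembly step (translate one predefined query per source, gather partial answers, build the answer table) can be sketched as below. The `translate` callback and the per-source callables are illustrative stand-ins for the repository-driven translation and the statechart protocols of the real system.

```python
def federated_query(query, sources, translate):
    """Translate one predefined query into a per-source SQL statement,
    gather the partial answers, and assemble the answer table."""
    table = []
    for name, execute in sources.items():
        sql = translate(query, name)   # repository knows each source's schema
        table.extend(execute(sql))     # partial answer from one database
    return table

# hypothetical data sources for the gear-unit example
sources = {
    "caq_db":     lambda sql: [("F-17", "sealing ring not assembled correctly")],
    "service_db": lambda sql: [("F-17", "customer reports oil loss")],
}
rows = federated_query("failure F-17", sources,
                       lambda q, src: f"SELECT * FROM failures /* {src} */")
```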
4. Formal Models in the Trader Repository

The WFMS architecture described in the previous section relies on shared representations of knowledge encoded in meta models which offer languages to describe workflows and information flows under a common conceptualization. In this section, we describe the main modeling concepts used to organize the information in the Trader repository. The core of the formal language (Figure 5) is derived from the process model proposed by McMenamin and Palmer [14]: an input (object) is consumed and manipulated by a system, and an output is produced. The system itself consists of processes, describing a workflow, processed by an agent using tools. The formal model, defined in the knowledge representation language Telos [17], has three purposes: (1) defining the flow of FM processes; (2) defining the interfaces for intra- and interdepartmental coordination and communication; (3) providing the basis for implementing or reusing tools in every step of the micro processes as well as for the WFMS specification of macro processes. These goals can be mapped onto two areas: workflow modeling and information modeling.
Workflow modeling means the modeling of FM processes, but also of their interaction with other processes in the organizational context. Describing the processes assists
• a company-wide understanding of certain concepts and workflows, needed for coupling systems,
• defined interfaces between spheres of competence, and
• the identification of agents and exchangeable information in every step.
In short, the concepts and their relations are as follows:
1. The concepts process/activity describe workflows (e.g. capture, analysis, and correction of failures) performed by agents. Processes can be refined by the contains attribute. Refined processes are coupled by objects. They can branch (produced objects are used by different processes) and unite (produced objects are used by one process). The concept activity is a specialization of the concept process, describing atomic processes. Agents can perform these steps autonomously as long as the modeled objects are produced.
2. The concept agent describes humans performing a process. An agent can consist of a team of agents, e.g. the quality control team.
3. Tools are technologies supporting the agents in performing their tasks. There is a clear distinction between tools and additional resources. Tools can be structured by a contains attribute.
4. Objects are artifacts consumed, manipulated, or produced in FM processes. Objects can be decomposed by a contains attribute.
The additional concepts event, action, and condition are not shown in Figure 5. External events, like a telephone call at a hot-line, can start FM processes. FM processes can also start actions outside the system. Pre- and postconditions are used for situations in which the existence of objects is not sufficient for deciding whether a process should be performed or not.
The elements of the language model are linked by attributes. Thus, the link between spheres of competence can be defined by the objects exchanged, providing a defined interface for task-relevant information exchange. The problem of responsibility is solved through the linkage of agents with processes. Based on this specification, intra- and interdepartmental workflows and information access can be supported by a WFMS. The workflows in one sphere can be described at a sufficient level of abstraction to serve as a basis for the specification of tools linked to agents and processes.
[Figure 5: Workflow modeling concepts. Agent, Tool, Process, Activity, and Object, linked by the attributes contains, uses, processes, supports, produces, consumes, and isA.]
[Figure 6: Information modeling concepts. Process, Object, Content, Media, and Presentation, linked by the attributes consumes, described by, stored, contains, allows, and represents.]
Information modeling (Figure 6) is the modeling of the information needed at the interfaces in micro cycles and macro processes. Describing the information flows assures
• the avoidance of media breaks and information holes in and between spheres,
• a common data specification at system level,
• the support of information exchange in a heterogeneous system world, and
• the availability of a complete and, as far as needed, uniform description of information.
The concepts added are described as follows:
1. Content is a description of the information/object structure. Especially objects structured by contains attributes (e.g. parts lists) need an overview of the contained information.
2. Media describes the form in which an object is saved. Non-computerized forms of information management can cause special activities like scanning or transforming data.
3. Presentation can (but need not) depend on the media. In particular, computerized forms of information management allow different presentations with the available front ends. The usage of a presentation depends on the consuming process of that object.
Modeling is needed for a common understanding and for defining interfaces for real information and work exchange between agents. The granularity of modeling the spheres themselves is left to the local departments. Even other approaches are possible as long as the modeled information is produced. Therefore, our approach avoids monolithic process models. Communication and
interaction needs are based on well-defined interfaces. Changes on the micro process level are handled within the departments. Changes or innovations on the macro process level are negotiated between the spheres.
The process of modeling was performed in a cooperative manner. As in the predecessor project WibQus [19, 23], the object base management system ConceptBase [11] was used as the conceptual modeling tool and for storage of the repository. Two case studies were performed with industrial partners. Partner engineers trained in using the language worked as mediators with the company people. Every department described its local processes and objects autonomously. The coupling of workflows and information flows was reached by bilateral negotiation processes. The constructed models served as a vocabulary with project-wide validity and as a starting point for system, interface, and tool specification.
To guarantee a uniform data model for FM tools, the project partners in FOQUS developed a common failure data model on the basis of the identified objects to be exchanged between spheres. Four main modeling areas for FM have been identified: failure description (attributes for capturing failures), failure context (products, processes, and machines, typically transferred from organizational data sources), failure analysis (symptoms, reasons, and measures), and failure classification. With the models created in this cooperative process, a first step towards a flexible and distributed FM system has been taken.
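The four modeling areas of the common failure data model can be sketched as a record structure. This is a hypothetical illustration only: all field names and the sample values are invented, and the project's actual attribute set is not reproduced here.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the common failure data model's four areas:
# failure description, failure context, failure analysis, and
# classification. Field names are illustrative, not the FOQUS schema.

@dataclass
class FailureDescription:
    captured_by: str            # who captured the failure (e.g. hot-line)
    symptom_text: str           # free-text capture at the workplace

@dataclass
class FailureContext:
    product: str                # typically transferred from
    process: str                # organizational data sources
    machine: str

@dataclass
class FailureAnalysis:
    symptoms: list = field(default_factory=list)
    reasons: list = field(default_factory=list)
    measures: list = field(default_factory=list)

@dataclass
class FailureRecord:
    description: FailureDescription
    context: FailureContext
    analysis: FailureAnalysis
    classification: str = "unclassified"

record = FailureRecord(
    FailureDescription("hot-line", "spindle vibration reported by customer"),
    FailureContext("milling machine", "final assembly", "M-12"),
    FailureAnalysis(symptoms=["vibration"], reasons=["worn bearing"],
                    measures=["replace bearing"]),
    classification="wear",
)
```

A uniform record of this shape is what allows such objects to be exchanged between spheres of competence without per-pair data conversions.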
5. Summary and Outlook
In this paper we have introduced fundamental concepts and technologies to support the workflows and information flows for flexible, distributed FM in a federated organizational context, namely
• the escalation principle, to preliminarily structure the distributed FM processes by systematically identifying spheres of competence and assembling them in a flexible manner,
• the modeling language for FM processes, which enables a sufficient degree of common understanding of workflows and information flows in distributed FM processes,
• defined interfaces at system level and support for system and tool development,
• the electronic circulation folder approach to integrate workflows and information flows,
• the workflow management concepts based on the defined models and the folder approach, and
• the system architecture to support distributed FM processes in organizations which depend on federated information systems.
Most of the more than 350 enterprises surveyed in FOQUS are planning for FM systems in this decade, but
so far little support is offered by software companies. Therefore, our approach was regarded with great interest at the workshops performed in the project. Most of the system architecture has been implemented and shown in an integrated demonstration at the final project review. The many constructive comments and suggestions made by the participants will be included in further specification and implementation tasks.
In the context of FOQUS, the special problems of variant-rich and innovative production, which were not discussed in this paper, have also been tackled. Our deliberations have shown that available WFMS are generally not suitable for this problem context. Thus, more sophisticated methods of information and workflow management are necessary for the continuous maintenance of knowledge in variant-rich and innovative product or service life cycles.
Acknowledgements: This work was supported by the German Federal Ministry of Education, Science, Research and Technology (BMBF) under grant 02 PV 710 25 and by the German Research Society's Graduate College 'Computer Science and Technology' at RWTH Aachen. The authors wish to thank our colleagues Manfred Jeusfeld and Peter Szczurko for their many fruitful comments and discussions. For the implementation of the FOQUS prototype we thank our students Marco Essmajor, Nico Hamacher, Gregor Lietz, and Axel Stolz.
References
[1] M. Amberg, "Modeling Adaptive Workflows in Distributed Environments", 1st Int. Conf. on Practical Aspects of Knowledge Management, Basel, Switzerland, Oct. 1996.
[2] M. Amberg, "The Benefits of Business Process Modeling for Specifying Workflow-oriented Application Systems", WfMC Handbook, Workflow Management Coalition, 1997.
[3] F. Burger and R. Reich, "Usability of Groupware Products for Supporting Publishing Workflows", in G. Chroust and A. Benczur (eds.), Proc. CON '94, Workflow Management: Challenges, Paradigms and Products, Linz, Austria, 1994, pp. 51-63.
[4] R. Daft and R. Lengel, "Organizational Information Requirements, Media Richness and Structural Design", Management Science, 32(5), 1986, pp. 554-571.
[5] R. Desatnik, "Long live the king", Quality Progress, 22(4), 1989, pp. 24-26.
[6] A. V. Feigenbaum, Total Quality Control, McGraw-Hill, New York, 1991.
[7] H. Förster, P. Klonaris, T. Pfeifer, and G. Warnecke, "Der Regelkreis ist noch nicht geschlossen", Qualität und Zuverlässigkeit, 41(10), 1996, pp. 1128-1132 (in German).
[8] V. Gruhn, "Business Process Modeling and Workflow Management", International Journal of Cooperative Information Systems, 4(2), 1995, pp. 145-164.
[9] M. Hammer, "Reengineering Work: Don't Automate, Obliterate", Harvard Business Review, July/August 1990, pp. 104-122.
[10] D. Harel, "Statecharts: A Visual Formalism for Complex Systems", Science of Computer Programming, 8, 1987, pp. 231-274.
[11] M. Jarke, R. Gallersdörfer, M. Jeusfeld, M. Staudt, and S. Eherer, "ConceptBase - a Deductive Object Base for Meta Data Management", Journal of Intelligent Information Systems, 4(2), 1995, pp. 157-192.
[12] M. Jarke, M. Jeusfeld, and P. Peters, "Model-Driven Design and Operation of Cooperative Information Systems", in M. Papazoglou and G. Schlageter (eds.), Advances in Cooperative Information Systems, Academic Press, 1997, pp. 263-290.
[13] B. Karbe and N. Ramsperger, "Concepts and Implementation of Migrating Office Processes", in W. Brauer and D. Hernandez (eds.), Verteilte künstliche Intelligenz und kooperatives Arbeiten, Springer-Verlag, 1991, pp. 136-147.
[14] S. M. McMenamin and J. F. Palmer, Essential System Analysis, Prentice-Hall, Englewood Cliffs, 1984.
[15] R. Medina-Mora, T. Winograd, R. Flores, and C. F. Flores, "The Action Workflow Perspective to Workflow Management Technology", Proc. 4th CSCW Conf., Toronto, 1992, pp. 281-288.
[16] S. Morschheuser, H. Raufer, and C. Wargitsch, "Challenges and Solutions of Document and Workflow Management in a Manufacturing Enterprise: A Case Study", Proc. of the 29th Hawaii Intl. Conf. on System Sciences (HICSS-29), 1996, vol. V, pp. 4-13.
[17] J. Mylopoulos, A. Borgida, M. Jarke, and M. Koubarakis, "Telos - a Language for Representing Knowledge about Information Systems", ACM Transactions on Information Systems, 8(4), 1990, pp. 325-362.
[18] A. Oberweis, R. Schaetzle, W. Stucky, W. Weitz, and G. Zimmermann, "INCOME/WF - A Petri Net Based Approach to Workflow Management", in H. Krallmann (ed.), Proc. of Wirtschaftsinformatik '97, Physica-Verlag, 1997, pp. 557-580.
[19] P. Peters and P. Szczurko, "Integrating Models of Quality Management Methods by an Object-Oriented Repository", 2nd Biennial European Joint Conf. on Engineering Systems Design and Analysis, London, 1994.
[20] P. Peters, U. Löb, and A. Rodriguez-Pardo, "Structuring Information Flow in Quality Management", Proceedings of the Fourth Intl. Conference on Data and Knowledge Systems for Manufacturing and Engineering, Hong Kong, 1994.
[21] P. Peters, Planning and Analysis of Information Flow in Quality Management, Dissertation, RWTH Aachen, 1996.
[22] T. Pfeifer, Quality Management, Carl Hanser Verlag, Muenchen, 1993 (in German).
[23] T. Pfeifer (ed.), Wissensbasierte Systeme in der Qualitätssicherung - Methoden zur Nutzung verteilten Wissens, Springer-Verlag, Berlin, 1996 (in German).
[24] T. Pfeifer (ed.), Fehlermanagement mit objekt-orientierten Technologien in der qualitätsorientierten Produktion, Forschungszentrum Karlsruhe Technik und Umwelt, FZKA-PFT 183, March 1997 (in German).
[25] K. Pohl, R. Klamma, K. Weidenhaupt, R. Dömges, P. Haumer, and M. Jarke, "A Framework for Process-Integrated Tools", Aachener Informatik-Berichte, 96-13, Aachen, 1996.
[26] T. Schäl, Workflow Management Systems for Process Organizations, LNCS 1096 (Diss. RWTH Aachen 1995), Springer-Verlag, 1996.
[27] A. I. Wasserman, "Tool Integration in Software Engineering Environments", in F. Long (ed.), Proceedings of the Intl. Workshop on Software Engineering Environments, Springer-Verlag, Berlin, 1990, pp. 137-149.
[28] H.-J. Warnecke, The Fractal Factory, Springer-Verlag, Berlin, 1992 (in German).
[29] M. Wersch, Workflow Management - Systemgestützte Steuerung von Geschäftsprozessen, Deutscher Universitäts Verlag, Wiesbaden, 1995 (in German).