A Process-oriented Backend for Data-driven Production Planning and Control

Research Report

Date published: May 7th, 2018

Selim Erol Institute of Management Science, Theresianumgasse 27, 1040 Vienna, Austria

Summary

Process management in industrial production faces ongoing challenges, which can be categorized into market-driven, society-driven and technology-driven challenges. Recent examples of such challenges are an increasing need for individualized products at low prices, and an increasing need for "greener" and more digitalized production processes. These external challenges impose a certain increase in the complexity of process management. To reduce the complexity of management tasks, production managers of the future need tool support both at design-time (planning) and at run-time (execution) of a production process. Digitalization, in this regard, can be a solution to the problem, as an increasing availability of data opens new ways to exploit this data for improved (more accurate) production planning and control. However, the large amounts of data collected need a semantic frame of reference to be analyzed ex-post or in real-time in a meaningful way. A possible solution approach to this problem is the use of process engines. Process engines (or workflow engines) keep track of the status of a process and provide an application- and implementation-independent reference for data produced during operations. For data produced by physical agents, a process context is provided which can be used to facilitate ex-post or real-time analytics for improved production planning and control tasks, e.g. process scheduling. In this research report, I report on the advances in developing and implementing such an engine for production planning and control.

Contents

Summary
1 Introduction
2 Data-driven process-oriented Production Planning and Control
2.1 Problem statement
2.2 Related research
2.3 Solution approach
3 Weasel Process Engine
3.1 Basic requirements
3.2 Software Architecture
3.2.1 Software components
3.2.2 Data model
3.3 Implementation
3.3.1 Weasel Order Manager
3.3.2 Weasel Scheduler
3.3.3 Weasel Process Designer
3.3.4 Weasel Instance Monitor
Literature

1 Introduction

Process management in industrial production faces ongoing challenges, which can be categorized into market-driven, society-driven and technology-driven challenges. Recent examples of such challenges are an increasing need for individualized products at low prices, and an increasing need for "greener" and more digitalized production processes. These external challenges impose a certain increase in the complexity of process management. To reduce the complexity of management tasks, production managers of the future need tool support both at design-time (planning) and at run-time (execution) of a production process. Digitalization, in this regard, can be a solution to the problem, as an increasing availability of data opens new ways to exploit this data for improved (more accurate) production planning and control. However, the mere technological capability to obtain raw data from production processes (e.g. from sensors and machines) does not necessarily lead to better performance. A look into industrial practice shows that industrial firms employ data loggers of all sorts for various purposes in their production environments, but struggle to relate real-time raw data from sensors and machines to production orders and, one level higher, customer orders, to be able to identify gaps between planning and execution both in time and over time. Furthermore, changes in production processes and their effects on operational performance are not consistently tracked, as a frame of reference on a process level does not exist. In this research report, I propose an approach, grounded in business information systems research, for data-driven production process planning and control. The report is structured as follows: in section 2, I explain the motivational background for process engines as the backbone of future production planning and control.
In section 3, I describe the principal architecture of the process engine and its features with regard to data-driven production planning and control. In contrast to traditional publication formats, I use a reduced language in the sense that sentences are short and Latin terms are mostly avoided. I also try to use a narrative style rather than an abstract, theory-driven style of writing. I use natural language and graphics instead of artificial (mathematical, programming) language. I write in the first person, as I have solely conducted the research reported on here and have written this entire report by myself. I have clearly indicated where others have contributed.

2 Data-driven process-oriented Production Planning and Control

2.1 Problem statement

Production planning in the traditional sense comprises all activities needed to optimally prepare a production department for a future demand situation. Typically, a production department distinguishes between long-term planning and short-term planning. Long-term planning aims at providing information for respective investments in machines, equipment and personnel. While long-term planning follows the same logic across industry sectors and company sizes, short-term planning depends strongly on the specific characteristics of an industry sector, e.g. discrete goods versus continuous goods, job production versus batch and flow production, "make-to-stock" versus "make-to-order". Short-term planning in general aims at scheduling concrete production orders resulting from customer orders ("make-to-order") or production programs ("make-to-stock"). Scheduling of production orders takes into account the given resources (machines, personnel) and tries to optimize the production process according to operational objectives such as machine utilization, stocks, throughput time and delivery adherence. While machines are assumed to be fixed, the amount of personnel resources can be adapted to short-term requirements to a certain degree (e.g. through temporal or spatial shifting of the workforce). Both long-term and short-term planning are necessary for the ongoing control (in the sense of control theory) of a production process. While planning provides information on the "to-be" state of a process, continuous monitoring of the "as-is" state enables a production manager to identify gaps and take respective actions. In a flow production and "make-to-stock" scenario with standardized products, the production process is well predictable and deviations are exceptional.
In a job production and "make-to-order" scenario, the production process is in general not well predictable regarding time and costs. In this scenario, exceptions are the rule. Monitoring of the process state (order progress) in such a scenario is difficult and dependent on manual, although software-supported, feedback from workers. To increase accuracy, feedback on the state of a production process is often complemented with automatic feedback devices (soft- and hardware supported, e.g. sensors or cameras). However, in both scenarios the continuous collection of data on the actual state of the overall process is essential, as data is the foundation for continuous process improvement. Today's technical equipment (sensors, applications, operating systems) is well capable of delivering large quantities of data. These large quantities (aka "big data") must be stored and processed in an effective manner. In other words, raw data needs a stable frame of reference to be able to link data across different technical systems, data sources and over (periods of) time in a meaningful way. This is where process models come into play. Process models are abstract descriptions of real production processes. In essence, a process model names the activities and their logical relations with each other, independent of a concrete process implementation. State-of-the-art process modeling languages (for example BPMN) provide means to describe processes from different perspectives such as activity flow, material flow and information flow. Thus, process models have the potential to serve as a frame of reference for raw data in production environments. Figure 2 shows an exemplary process model, the instances of the process model and related data objects. The process model is independent of the technical and organizational implementation. Changes in the technical infrastructure and the organizational structure do not affect the process model, which ideally stays the same over a longer period. All data collected over time from different sources and systems refers to respective activities (defined by the process model). Consequently, consistent combination and analysis of such data, information generation, knowledge building and decision making over time become possible.
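As a minimal illustration of this idea, the sketch below groups raw data records from different technical systems under the activities named by a process model, which then serves as the shared semantic anchor. All activity names, source names and values are invented for the example and are not taken from any real system.

```python
from collections import defaultdict

# A process model names the activities of a process; here it is just a list
# of activity names (illustrative, not a full BPMN model).
PROCESS_MODEL = ["cutting", "welding", "painting", "assembly"]

def group_by_activity(records):
    """Group raw data records under the activities of the process model."""
    grouped = defaultdict(list)
    for rec in records:
        # the process model acts as the stable frame of reference
        if rec["activity"] in PROCESS_MODEL:
            grouped[rec["activity"]].append(rec)
    return grouped

records = [
    {"source": "plc-17", "activity": "welding", "value": 412.0},
    {"source": "sensor-3", "activity": "welding", "value": 88.5},
    {"source": "erp", "activity": "painting", "value": 1},
]
by_activity = group_by_activity(records)
# two records from different technical systems now share one semantic anchor
print(len(by_activity["welding"]))  # -> 2
```

The point of the sketch is that records from a PLC, a sensor and an ERP system become comparable only because all of them reference an activity defined by the model, not because their technical formats agree.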

2.2 Related research

The idea to improve the effectiveness (e.g. accuracy) of production planning and control through the support of computers and a largely digitalized information flow across the different levels of the automation pyramid is not new. In fact, the early concept of Computer Integrated Manufacturing (CIM) (Gaylord, 1987; Scheer, 1987) pointed in this direction as early as the late 1980s. However, at this time the focus was not so much on how to generate and make effective use of data, but on how to integrate different technical systems across the different levels of the automation pyramid (see the ISA-95 standard). Since then, many approaches have addressed the issue of integration. Consequently, different technical solutions have been developed, ranging from holistic system architectures (see for example (Beeckman, 1989; Pritschow et al., 2001; Lee, Bagheri, & Kao, 2015; Gröger et al., 2016)) to interface standards (e.g. OPC UA) and technologies (see for example (Lu, Morris, & Frechette, 2016)) and ready-to-use industrial solutions, so-called Supervisory Control And Data Acquisition (SCADA) systems or Manufacturing Execution Systems (MES), from market-leading companies like Siemens, SAP or IBM. Recent research has addressed the challenge of collecting and exploiting large amounts of real-time data for improved ("intelligent") manufacturing in general (Nagorny, Lima-Monteiro, Barata, & Colombo, 2017; O'Donovan, Leahy, Bruton, & O'Sullivan, 2015). Developments in computer science (e.g. artificial intelligence and data analytics) and the availability of low-priced hardware to collect large amounts of data (e.g. sensors) have also fueled applied research towards smart production systems powered by "real-time" and "big" data. I intentionally put the terms "intelligent", "real-time" and "big" in quotation marks because no scientifically agreed upon definitions exist for these attributes. Rather, these attributes are context-specific and therefore require a context-specific definition. Nagorny and others (Nagorny et al., 2017) have conducted a comprehensive and recent review of research and practice in the field of big and real-time data analysis for manufacturing. In essence, their review shows that various architectural concepts (e.g. RAMI 4.0 (ZVEI, 2015)), methodological concepts (e.g. CRISP (Wirth, 2000)) and technological concepts (e.g. TCP/IP, HTTP, MTConnect, OPC UA) exist to date to enable the necessary infrastructure to fluently collect and process data from the shop floor and from other sources. However, the authors point to the fact that a particular challenge arises in choosing an appropriate resolution for data acquisition (e.g. granularity of time stamps), choosing appropriate reduction techniques and choosing (semantic) structures for subsequent data analysis. The latter finding has been a major driver for the research conducted by Krumeich and others (Krumeich, Jacobi, Werth, & Loos, 2014). The authors propose a Complex Event Processing (CEP) (Cugola & Margara, 2012) based approach that aims at handling data from multiple sources (events) and relating these data to their process context. CEP is an approach to reduce the amount of data by detecting patterns in raw data (e.g. temperature measurements of a machine over time) and producing so-called complex events according to some rules (e.g. temperature higher than 90°C over a time period of 5 minutes). These events can then be used by some application logic to predict a certain behavior (consecutive events).
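The temperature rule used as an example above can be sketched as a small pattern detector that reduces a stream of raw samples to a few complex events. This is not a real CEP engine API; the function name, the time base (seconds) and the thresholds are assumptions made purely for illustration.

```python
def detect_overheat(samples, threshold=90.0, min_duration=300):
    """Reduce raw samples to complex events (a minimal CEP-style sketch).

    samples: list of (timestamp_s, temperature) in chronological order.
    Returns (start, end) intervals where the temperature stayed above the
    threshold for at least min_duration seconds.
    """
    events, start = [], None
    for ts, temp in samples:
        if temp > threshold:
            if start is None:
                start = ts          # a candidate pattern begins
        else:
            if start is not None and ts - start >= min_duration:
                events.append((start, ts))   # pattern long enough: emit event
            start = None
    # handle a pattern still open at the end of the stream
    if start is not None and samples and samples[-1][0] - start >= min_duration:
        events.append((start, samples[-1][0]))
    return events

# one sample per minute; the machine stays above 90 °C for six minutes
alerts = detect_overheat(
    [(0, 95), (60, 96), (120, 97), (180, 95), (240, 93), (300, 92), (360, 80)]
)
print(alerts)  # -> [(0, 360)]
```

Many raw measurements collapse into a single complex event, which is exactly the data-reduction effect the CEP literature aims at; downstream logic then reacts to events rather than to individual samples.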
CEP has been developed in the context of distributed computer systems and has recently gained momentum in the business process management domain, where the complex event processing approach is combined with business process modeling and data mining approaches (Barros, Decker, & Grosskopf, 2007; Bülow et al., 2013). Krumeich and his colleagues do not provide a concrete solution for this problem, but provide a general conceptual architecture for a CEP-based approach and a possible research direction. Further research by Krumeich and his colleagues led to a conceptual architecture for predictive process planning and control. The architecture proposed is based on a physical layer (L0) that produces data, a layer (L1) that efficiently handles (streams and stores) the data, and a layer (L2) that analyses and aggregates the data to be used in the next layer (L3). L3 is the user interface to human decision makers through dashboards, alerts and recommendations. In addition, non-human decision makers can be involved that autonomously choose a course of action based on some algorithm. A major pillar in their work is the introduction of process models as the frame of reference for data analytics (Krumeich, Werth, Loos, Schimmelpfennig, & Jacobi, 2014). An approach presented by Zhang and colleagues (Zhang, Wang, Du, Qian, & Yang, 2018) makes use of Petri-Net based process models to manage sensors and their data streams in a manufacturing environment. Their proposed architecture introduces an additional layer that deals with sensors and their data in a standardized yet flexible way. Based on the Colored Petri-Net (CPN) notation, process models to control sensor behavior and data streams can be defined. Different process models can be defined for different types of sensors. Sensors need to register before they are able to deliver data packets. Data packets are processed by a run-time instance of a CPN-based model and are subsequently delivered to the application service layer. An application service is, for example, a job scheduling application. Da Silva and colleagues (da Silva, Junqueira, Filho, & Miyagi, 2016) propose an architecture and a workflow-based control system for manufacturing. Their proposed system architecture is based on the idea of holonic manufacturing systems, where holons are autonomous subsystems of a manufacturing system. A process model is then used to describe and control the interplay between autonomous holons. The authors have implemented this architecture through a Java-based development framework where a workflow engine manages instances of process models. To model the production control flow (process models), an extended Petri-Net notation is used. Events resulting from sensors and other data sources are used to control the process. The Workflow Management Coalition (WfMC) proposed architectures for workflow management and workflow engines as early as 1994 (Jablonski, 1997). The original idea behind a workflow engine is that it manages the overall state of a process independent of its implementation. Process models define the flow of activities within a process and the resources needed. While process models define the process on a type level, instances describe the actual state of the process. Instances of workflow models (or process models) are initiated by a specific case, for example a production order, and serve as a reference to monitor the overall state of a process and the related data.
While quite a few reference architectures have been proposed, only few reference implementations exist to date. Examples of publicly or commercially available business process engines are Activiti (Rademakers, 2012), SAP BPE (SAP AG, 2018) and IBM BPM (IBM Corp., 2018). The concept of workflow engines, or business process engines as their modern successors, has not yet found its way into production management applications to a full extent. An early example of research in this direction is CIMFlow, a workflow engine proposed by Luo and colleagues (Luo & Fan, 1999). CIMFlow mainly follows the architecture proposed by the WfMC. The workflow engine is used to maintain the state of workflow instances, schedule activities, generate worklists for each participant, and maintain log files. In their concept, workflow engines can communicate with other workflow engines via a special language. Their approach is mainly focused on controlling production processes through manual interactions and automatic allocation of resources. Integration with the execution level ("shop floor") and processing of large amounts of data to support planning have not been considered in their concept. The integration issue is partly addressed by work from Zhang and others (Zhang, Qu, Ho, & Huang, 2011). They propose a so-called smart gateway. This gateway is used as middleware to provide standardized interfaces to physical production resources (for example machines, materials). Through these interfaces, physical resources are controlled and data is collected. The smart gateway is also used for managing production processes through a workflow execution component. Each instance of a process can be traced with regard to its status and its history. The research summarized in the prior paragraphs reveals that a certain importance is given to the tight integration of data and processes for more adaptive production processes. While some researchers put more emphasis on developing generally applicable architectures, others focus on particular parts of such an architecture, for example process control concepts or data analytics concepts. In our research, we want to address the problem of relating raw data to production processes for later use in decision making and planning. In prior work (Erol & Sihn, 2017), we have proposed a system architecture that is based on the idea of process models to define the behavior of a production system across the different levels of a production system as defined by ISA-95. In this report, we will refine this concept with regard to production data integration.

2.3 Solution approach

Figure 1: Concept of linking (raw) data to process model to conserve their context of generation

Figure 1 illustrates the basic idea of linking data from various sources on the physical process level ("shop floor") to virtual representations of a process. By virtual representations of a process, I mean formal descriptions of a process, also known as process models. Process models describe the behavior of a production system. In Figure 1 a process is shown together with its sub-processes. Such processes can have a multi-level structure, where the activities on the lowest level represent the tasks executed by a real-world agent (a machine or a human operator). The advantage of linking data sources to process models is that data from various sources are also linked to each other through their execution context.

Figure 2: Detailed concept of linking (raw) data to process model to conserve their context of generation

As process models are described through graph-based formal or semi-formal languages, relations between datasets are evident beforehand and do not need to be detected after their emergence. A multi-level process model has the advantage that data that applies to a larger set of activities, or that cannot be related to a particular task at the time of emergence, can be related to higher-level activities. Later, such data can be broken down to particular tasks to investigate possible dependencies (e.g. a relation between room temperature and the energy consumption of a machine). In the approach presented here, instances of a process model are used as an envelope for data that is generated on the shop floor. Instances form an intermediate layer between a process model and the execution-related data of a process. Thus, data is always semantically enriched with information on the process execution, e.g. the task that was executed, and the date and time of the task. Through the process model, relations between data emerging from different tasks can be established at any time. Figure 2 shows a process model with activities A1, A2, ..., A6 and an instance of the process model which reflects the path actually taken to produce a product. The process instance includes activities A1', A2', A3', A4', A6'. Agents P1', ..., P6' execute the process and produce data objects D1', ..., D6'. Data objects are annotated with information on the process instance within which they were produced.
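The envelope idea can be made concrete with a short sketch: every raw data object produced by an agent is wrapped with its process context (instance id, activity, agent, timestamp) before it is stored. The field names below are my own assumptions for illustration and do not reflect the actual WPE schema.

```python
from datetime import datetime, timezone

def annotate(raw_payload, instance_id, activity_id, agent_id):
    """Wrap a raw data object in its process-execution context (sketch)."""
    return {
        "instance_id": instance_id,   # stable identity of the process instance
        "activity_id": activity_id,   # task within the process model (e.g. A6')
        "agent_id": agent_id,         # machine or operator that produced the data
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "payload": raw_payload,       # the raw data itself stays untouched
    }

d = annotate({"temp_c": 88.5}, instance_id="PI-1042", activity_id="A6", agent_id="P6")
print(d["instance_id"], d["payload"]["temp_c"])  # -> PI-1042 88.5
```

Because the raw payload is untouched, any later analysis can still use the original measurements, while the surrounding envelope makes them joinable with data from other tasks of the same process instance.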

3 Weasel Process Engine In this section, I describe the architectural concept behind the Weasel Process Engine as a possible solution approach to data integration for improved production planning and control.

3.1 Basic requirements

Process models are abstract descriptions of a physical production process. Depending on the language chosen to describe a process, models include the flow of activities, the flow of information, the flow of materials and the agents that perform the activities. While a process model is a holistic description of a process including possible variations, an instance of a process model describes the actual flow of activities with start and end times, and the actual materials and information objects used.

Requirement 1: A process engine must be able to create instances of a process model. Instances must be able to store actual operational data on the execution of a process. The minimum data required are the time data (start and end times) of each activity involved in a process, the material objects (material type and quantity) and information objects involved, and the agents (type and quantity) involved.

For a continuous tracing of process performance, a production manager must be able to trace process instances back to their original process model. In other words, process instances must be comparable against each other through their original process model. Thus, process models must keep track of their changes through some kind of version management.

Requirement 2: Process instances must be traceable back to their original process model version. Changes to a process model must be documented, and for each process instance a valid version of the process model must be maintained.

At any time during operation, a production manager must be able to monitor the actual state of a process instance. The actual state of a process instance can be described by predefined operational indicators or by linked data. A production manager must be able to analyze such linked data in an explorative yet unambiguous way. Linked data is data that is generated by the agents that execute the process, for example the energy consumed to perform a certain activity. Such data is often generated by external¹ sensors or other devices, e.g. a temperature sensor that is attached to a machine.

Requirement 3: Each process instance must have a unique and stable identity over time. Data that is generated in the course of a process by a machine or a human agent must be unambiguously linkable to a process instance.

In some scenarios, e.g. "make-to-order", a production manager must be able to plan and schedule process instances for later execution. For this purpose, the actual capacity situation of the agents must be evaluated. This evaluation has to be done through an analysis of process instances in progress and allocated resources. Process instances must then be assigned a planned start and end time, an agent (or set of agents) that performs the process, and a set of materials that are needed beforehand.

Requirement 4: Process instances must be able to be scheduled in advance. Scheduling must be performed to satisfy external and internal constraints. Each process instance must be able to hold different states.

Process instances and linked data are essential to evaluate the state of a process and are a basic requirement for "good" managerial decisions. The data collected from a process instance must be accessible and navigable through a human-readable and interactive interface.

Requirement 5: It must be possible to analyze and evaluate process instances and their actual state at any point in time (in real-time) through an adequate user interface.
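Requirement 2 can be sketched as a small version-managed model repository in which every released change produces a new immutable version, and every instance pins the exact version it was created from. The class and method names here are illustrative assumptions, not part of WPE.

```python
class ModelRepository:
    """Minimal sketch of process model versioning (Requirement 2)."""

    def __init__(self):
        self._versions = {}   # (model_id, version) -> model definition

    def release(self, model_id, definition):
        """Store a changed model under a new, immutable version number."""
        version = 1 + max((v for m, v in self._versions if m == model_id), default=0)
        self._versions[(model_id, version)] = definition
        return version

    def instantiate(self, model_id, version):
        # the instance records the exact version it was created from, so it
        # stays comparable against its original model even after later changes
        definition = self._versions[(model_id, version)]
        return {"model_id": model_id, "model_version": version,
                "definition": definition, "state": "created"}

repo = ModelRepository()
v1 = repo.release("assemble-pump", ["A1", "A2", "A3"])
v2 = repo.release("assemble-pump", ["A1", "A2", "A4", "A3"])  # model changed
inst = repo.instantiate("assemble-pump", v1)
print(inst["model_version"], v2)  # -> 1 2
```

The instance created from version 1 remains traceable to that version even though version 2 already exists, which is the comparability property the requirement asks for.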

3.2 Software Architecture

Based on the requirements outlined in section 3.1, I will now describe the principal architecture of the process engine I developed. I called it the Weasel Process Engine (WPE) because it is based on an open-source application development framework called Weasel². Weasel has been developed in the course of former research in the field of order management and has been successively extended with features like process modeling. It is based on PHP and Apache as the programming and execution environment for the business logic on the server side, and HTML/JavaScript for the user interface logic on the client side.

¹ External relative to the agent performing a process.
² https://launchpad.net/weasel2

3.2.1 Software components

WPE is built on top of the Weasel core. The Weasel core comprises a user and permission management component and an easily extendable application development and data management component. On top of these core features, several application components (apps) exist, for example an order management component, a materials management component and a content management component.

Figure 3: Main software components of WPE

WPE has been designed with the idea that a central process engine keeps track of all business processes in a company on the information management level. In Figure 3 the main components of WPE are sketched. For process design, a Process Designer component has been developed with which a process can be modeled and formally described with BPMN. The Process Designer component supports collaborative development of such process models through a wiki-based approach (Erol, 2016). Process models are subsequently stored in a Process Model Repository. Process models are instantiated by some kind of trigger. In the case of production management, the release of a production order triggers production activities. For this purpose, the order management component is tightly integrated with the Process Engine. Process instances are indexed and stored in a repository as well, from which they can be accessed through the Process Engine during operation. The Process Engine serves as the central management component for the invocation of production activities based on a production order. The Process Scheduler, in case a process follows a push principle, computes capacity loads and distributes activities across agents. In case of human agents, a user interface displays a Task List. The Task List enables a worker on the shop floor to keep track of activities, to feed back completion of tasks and to see which activities have to be performed next. All instances can be monitored and managed in real-time through the Instance Monitor. Performance can be evaluated for an arbitrary period through the Performance Monitor.

3.2.2 Data model

In Figure 4, a class diagram shows the main object types of the process engine. In WPE all process models are stored as activity type objects. An activity type, like any other object type in WPE, has a unique id, a title, and several other meta-data attributes such as the date it was created, the date it was last modified, and keywords describing the activity type. Each activity type also includes a control flow attribute, which contains references to other activity types and their flow logic. In WPE each activity type may refer to arbitrary other (sub-)activities. Thus, deep process structures comprising arbitrary levels can be modeled and instantiated.

Figure 4: Classes of main objects in the WPE. The diagram covers activity types (ws_activitytypes) with their input and output types (ws_activitytypes_inputs, ws_activitytypes_outputs), activities (ws_activitys) with their inputs and outputs (ws_activitys_inputs, ws_activitys_outputs), production orders (ws_porders), product types (ws_producttypes), products/materials (ws_products), machine types (ws_machines_types), machines (ws_machines), person types/roles (ws_persontypes) and persons (ws_person).

Each activity type can have arbitrary input and output types. In the current implementation, input types can be machine types, person types, and material types. Instances of an activity type have attributes like planned and actual dates, an overall state, and a reference to a trigger object like a production order. This reference id allows tracing all activities back to a specific production order. Instances of an activity are assigned to concrete machines or persons, which are instances of their respective types. Material types specified as input types for an activity type can be instantiated as well. This approach is useful in case complete material tracking and tracing is required on an object level. For example, in aeronautic industries most parts of an airplane must be traceable back in terms of their genealogy. In other cases, materials are managed on a type level. This is, for example, the case in industries with continuous goods, where materials are recorded only with their physical quantities and not on the level of individual objects.
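The type/instance split described above can be illustrated with a minimal sketch. Python is used here only for illustration (WPE itself is written in PHP), and all class and field names are assumptions of mine, loosely following Figure 4 rather than the actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ActivityType:
    """Template: which input types are needed and which output types are produced."""
    type_id: int
    title: str
    input_type_ids: list   # machine-, person- and material-type ids
    output_type_ids: list  # product-type ids

@dataclass
class Activity:
    """Instance: concrete assignments, dates, status, and the trigger reference."""
    activity_id: int
    activitytype_id: int           # link back to the template (activity type)
    reference_id: int              # trigger object, e.g. a production order
    machine_id: Optional[int] = None
    status: str = "planned"
    planned_start_date: Optional[str] = None
    planned_finish_date: Optional[str] = None

def trace_to_order(activities, order_id):
    """All activities triggered by one production order (genealogy tracing)."""
    return [a for a in activities if a.reference_id == order_id]

acts = [Activity(1, 10, reference_id=500), Activity(2, 11, reference_id=500),
        Activity(3, 10, reference_id=501)]
print([a.activity_id for a in trace_to_order(acts, 500)])  # -> [1, 2]
```

The `reference_id` field is what makes every shop-floor data point traceable to its production order, as described above.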

3.3 Implementation

In this section, I describe the different components in detail.

3.3.1 Weasel Order Manager

Figure 5: Screenshot from Order Entry Form

Figure 5 shows the form that holds the data of a production order. The production order can be generated from a customer order manually or automatically. It holds data that are essential for production: the date the order was created; the reference to a customer order (contract), if applicable; the type of product to be produced; the desired delivery date from a customer or sales point of view; and the planned start and finish dates from a production department view. It also holds the quantity (lot size) of the good to be produced and the unit of measurement. For continuous goods, units other than pieces will be used. The production order can be in preparation (status "open"), released for production but not yet started (status "released"), in progress (status "in progress"), and finally finished (status "finished"). As can be seen in the class diagram in Figure 4, a production order is linked to an activity type (process model) through the product type. For each product type, a process model is assumed to exist that describes the steps to produce it. When a production order is released (see the button in the upper part of the screenshot), WPE searches for a process model (activity type) that has the respective product type as output type. Given that such an activity type exists, an activity instance is created, sub-activities are instantiated as well, and they are scheduled and allocated according to a simple forward scheduling algorithm. The scheduling algorithm takes into account the assigned input types (machines and personnel) and allocates free capacities. As a result, each activity instance has a planned start and finish date, and a machine, workplace or person assigned.

3.3.2 Weasel Scheduler

For implementing the Scheduler component, I programmed a "simple" forward scheduling algorithm. When a production order is released, the Scheduler searches for a related activity type (process model) and determines the sequence of activities to be performed and the types of agents (machines, persons, workplaces) to be used for the activities. The Scheduler then searches for available agents to fulfil each activity in the given order and as soon as possible. "As soon as possible" in this context means that new activities are always scheduled after already planned activities on a given agent. The scheduling algorithm processes the following main steps in the given order:

1. For a given production order, identify the product and quantity to be produced.
2. For the identified product type, find an activity type with an adequate output type.
3. For the activity type, get the sequence of sub-activities (control flow property) to be scheduled.
4. For each sub-activity type, get the agent type needed.
5. For each agent type, determine the earliest available agent (instance of the agent type) based on the latest finish dates of already planned activities.
6. For each sub-activity type, create an activity (instance of the activity type).
7. For each sub-activity, determine the earliest start date based on the earliest available agent (see step 5).
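The scheduling steps above can be sketched roughly as follows. This is a deliberately minimal Python sketch (the actual component is written in PHP); the class and field names are assumptions of mine, and the sketch schedules sub-activities strictly in sequence, ignoring the parallel branches the real engine supports:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    agent_id: str
    busy_until: int = 0  # latest finish time of already planned activities

@dataclass
class SubActivityType:
    title: str
    agent_type: str
    duration: int  # throughput time in the configured time unit

def forward_schedule(sub_activity_types, agents_by_type, release_time=0):
    """Schedule sub-activities in sequence, each as soon as possible.

    New activities are always placed after the already planned activities
    on the chosen agent (simple forward scheduling, no re-planning).
    """
    schedule = []
    previous_finish = release_time
    for sub in sub_activity_types:
        # earliest available agent of the required type
        agent = min(agents_by_type[sub.agent_type], key=lambda a: a.busy_until)
        start = max(previous_finish, agent.busy_until)
        finish = start + sub.duration
        agent.busy_until = finish  # allocate the agent's capacity
        schedule.append((sub.title, agent.agent_id, start, finish))
        previous_finish = finish
    return schedule

# usage: two saws and one assembly workplace (illustrative data)
agents = {"saw": [Agent("saw-1"), Agent("saw-2")], "assembly": [Agent("asm-1")]}
steps = [SubActivityType("cut parts", "saw", 2),
         SubActivityType("assemble bed", "assembly", 3)]
print(forward_schedule(steps, agents))
```

The key design choice mirrored here is that an agent's existing allocations are treated as fixed; the genetic extension described below relaxes exactly this assumption.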

Figure 6 shows a screenshot of a simplified production schedule for the production of a wooden bed. The production process consists of five activities, which are partially performed in parallel (activities 1 and 2) and partially in sequence. Activities belonging to one production order are distinguished from other production orders through different coloring. Colors are chosen on a random basis. Each activity in the schedule is tagged with its title, its ID, and information on the production order it belongs to. Figure 6 illustrates a very simple situation in which only production orders for the same product are shown; therefore, all activities are tightly scheduled. In more realistic and complex scenarios with different product variants, several gaps between activities would occur.

Figure 6: Screenshot of Scheduler component

The Scheduler can be configured for different time scales, as different production processes need different granularities of time. For example, production planning of complex machinery will require a granularity of weeks, whereas production of a car is usually performed within days or hours1. The basic configuration of granularity determines the time unit of the throughput times (processing times) maintained for activity types. However, the granularity of the schedule visualization can be changed independently of the basic configuration.

In a recent research effort, the Scheduler, which is based on a simple forward scheduling algorithm, has been extended with a genetic algorithm. The genetic algorithm is able to compute solutions for a given set of activities, a set of agents and a given period of time. In contrast to simple forward scheduling, the genetic approach does not assume already planned activities as given but considers them changeable as well. In addition, it tries to find the best solution out of a set of mutated solutions for a given problem. As a genetic approach needs substantial computational resources for larger problems, we2 have chosen a different architecture: the algorithm was encoded in the C language instead of PHP and subsequently implemented as a PHP module which can be used by a PHP script.

1 These exemplary times do not consider manufacturing times of externally procured parts.
2 Alexander Reiling did the programming and a lot of the main brain work; I had the idea and supervised his work.
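A heavily stripped-down sketch of such a genetic approach follows, in Python rather than C/PHP for brevity. The encoding and parameters are illustrative assumptions, not the actual extension's design: the chromosome assigns each activity to an agent, fitness is the makespan (load of the busiest agent), selection keeps the fitter half, and children arise by one-point crossover plus occasional mutation:

```python
import random

def makespan(assignment, durations, n_agents):
    """Fitness: total load of the most loaded agent for one candidate."""
    busy = [0] * n_agents
    for activity, agent in enumerate(assignment):
        busy[agent] += durations[activity]
    return max(busy)

def genetic_schedule(durations, n_agents, pop_size=30, generations=200, seed=1):
    """Evolve activity-to-agent assignments; smaller makespan is fitter."""
    rng = random.Random(seed)
    n = len(durations)
    pop = [[rng.randrange(n_agents) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: makespan(ind, durations, n_agents))
        survivors = pop[: pop_size // 2]           # selection: keep fitter half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # mutation: reassign one gene
                child[rng.randrange(n)] = rng.randrange(n_agents)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda ind: makespan(ind, durations, n_agents))

# five activities with given durations, two agents of the same type
best = genetic_schedule([4, 3, 3, 2, 2], n_agents=2)
print(makespan(best, [4, 3, 3, 2, 2], 2))
```

Unlike the forward scheduler, this search treats every assignment as changeable and compares whole candidate schedules, which is exactly why it becomes computationally expensive for larger problems.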

Figure 7: Screenshot of a schedule as computed by the genetic algorithm extension

The Scheduler component is designed to be used by any application that needs a scheduling service. For example, a service or maintenance management application can make use of the scheduling service component through a simple HTTP (Hypertext Transfer Protocol) based communication protocol.
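As an illustration of such a client, the following Python sketch builds an HTTP request for the scheduling service. The endpoint path and the JSON field names are hypothetical assumptions, since the actual protocol is not specified here:

```python
import json
from urllib import request

def build_schedule_request(base_url, production_order_id, release_date):
    """Build an HTTP request asking the Scheduler service to plan an order.

    The /scheduler/schedule path and the payload field names are
    illustrative assumptions -- WPE's actual protocol may differ.
    """
    payload = json.dumps({
        "order_id": production_order_id,
        "release_date": release_date,
    }).encode("utf-8")
    return request.Request(
        url=f"{base_url}/scheduler/schedule",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_schedule_request("http://localhost:8080", 500, "2018-05-07")
print(req.get_method(), req.full_url)  # POST http://localhost:8080/scheduler/schedule
```

A caller would pass the request to `urllib.request.urlopen` (or any HTTP client) and parse the returned schedule from the response body.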

3.3.3 Weasel Process Designer

The Process Designer component has been developed to facilitate collaborative ideation and design of a production process before it is implemented in an organization. I programmed the graphical user interface (Graphel) by means of HTML, SVG and JavaScript. Thus, it can be integrated easily into any application requiring a process design component. Graphel supports a process designer in modeling a production process with activity flow, information flow and material flow. Graphel has been designed primarily to support the symbol set and semantics of BPMN. In the context of WPE, Graphel is embedded in the content management component of Weasel and therefore also allows consistent versioning of process models. For a detailed description of the version management of the Process Designer component see a previous publication (Erol, 2017).

Figure 8: Process Model Designer showing a production process model for a wooden bed

Process models designed with the Process Designer component can subsequently be enacted as an activity type in WPE. When enacting a process model, several requirements must be fulfilled. First, a process model must have clearly defined output types (e.g. product types). Second, alternative paths in a process model are allowed but are transformed into different activity types with different output types. The basic assumption behind this approach is that a variation in a production process always affects the type of product. In other words, products of the same type are always produced in the same way, and if an alternative path has no effect on the product type it is merged. For example, if a process model contains alternative paths for the type of finishing of a product (e.g. painting, laminating), the different paths will be merged into one as long as the product type is the same (holds the same product id).
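A minimal sketch of this transformation rule follows, with a deliberately simplified representation: each alternative path is reduced to its output product id and a list of step titles, and "merging" crudely keeps the first path per product id. The real enactment logic is richer; names and structure here are assumptions for illustration only:

```python
def enact_process_model(paths):
    """Transform a process model's alternative paths into activity types.

    paths: list of (output_product_id, [activity titles]).
    Paths with the same output product id are merged into one activity
    type; distinct product ids yield distinct activity types.
    """
    activity_types = {}
    for product_id, steps in paths:
        # same product id -> same activity type (keep the first path's steps)
        activity_types.setdefault(product_id, steps)
    return activity_types

paths = [
    ("bed", ["cut", "assemble", "paint"]),
    ("bed", ["cut", "assemble", "laminate"]),  # same product type -> merged
    ("shelf", ["cut", "assemble"]),            # different type -> own activity type
]
print(enact_process_model(paths))
```

The point the sketch makes is the keying decision: the output product type, not the path itself, determines the identity of the resulting activity type.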

In the current implementation of the Process Designer only simple sequences of activities are supported. Events are omitted when transforming a BPMN process model to an activity type.

3.3.4 Weasel Instance Monitor

Activity instances created from activity types (process models) are stored in Weasel's database along with their status and start/stop times. In addition, each activity instance holds a reference to the activity type it is derived from. Depending on the production scenario chosen, instances of input and output types are also created with each activity instance. For example, if products created and materials consumed during production need to be tracked on an instance level throughout the whole production process, respective instances of product and material types are created. Following the approach described in section 2, these instances are the primary reference for all data produced by production equipment on the shop floor. Linking shop floor data to activity instances makes its use for improved production planning and control easier, as the context in which the data has been produced is evident. Monitoring of activity instances is, in the current implementation, performed by a simple list of activities and their status. Each activity can also be actively changed with regard to status and time information. In the case of an operational production line, a large number of activity instances and related entries will accumulate in the database, and monitoring individual instances will then become quite difficult. For this purpose, an aggregated view on activities and other more sophisticated tools must be implemented on top of the simple list currently provided.
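Such an aggregated view could start as simply as counting instances per status. A minimal Python sketch with hypothetical data, assuming activity instances are available as (id, status) pairs:

```python
from collections import Counter

def status_overview(activities):
    """Aggregate activity instances into counts per status -- the kind of
    condensed view suggested on top of the plain instance list."""
    return Counter(status for _, status in activities)

instances = [(1, "finished"), (2, "in progress"), (3, "in progress"), (4, "open")]
print(status_overview(instances))
```

In practice this aggregation would be pushed into the database (e.g. a GROUP BY over the activity table) rather than computed in application code.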

4 Literature

Barros, A., Decker, G., & Grosskopf, A. (2007). Complex Events in Business Processes. In Business Information Systems (pp. 29–40). Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72035-5_3

Beeckman, D. (1989). CIM-OSA: computer integrated manufacturing—open system architecture. International Journal of Computer Integrated Manufacturing, 2(2), 94–105. https://doi.org/10.1080/09511928908944387

Bülow, S., Backmann, M., Herzberg, N., Hille, T., Meyer, A., Ulm, B., … Weske, M. (2013). Monitoring of Business Processes with Complex Event Processing. In Business Process Management Workshops (pp. 277–290). Springer, Cham. https://doi.org/10.1007/978-3-319-06257-0_22

Cugola, G., & Margara, A. (2012). Processing Flows of Information: From Data Stream to Complex Event Processing. ACM Computing Surveys, 44(3), 15:1–15:62. https://doi.org/10.1145/2187671.2187677

da Silva, R. M., Junqueira, F., Filho, D. J. S., & Miyagi, P. E. (2016). Control architecture and design method of reconfigurable manufacturing systems. Control Engineering Practice, 49, 87–100. https://doi.org/10.1016/j.conengprac.2016.01.009

Erol, S. (2016). Collaborative Modeling of Manufacturing Processes – a Wiki-Based Approach. In Cooperative Design, Visualization, and Engineering (pp. 17–24). Springer, Cham. https://doi.org/10.1007/978-3-319-46771-9_3

Erol, S. (2017). Recalling the rationale of change from process model revision comparison – A change-pattern based approach. Computers in Industry, 87, 52–67. https://doi.org/10.1016/j.compind.2017.02.003

Erol, S., & Sihn, W. (2017). Intelligent Production Planning and Control in the Cloud – towards a Scalable Software Architecture. Procedia CIRP, 62, 571–576. https://doi.org/10.1016/j.procir.2017.01.003

Gaylord, J. (1987). Factory Information Systems: Design and Implementation for CIM Management and Control. CRC Press.

Gröger, C., Kassner, L., Hoos, E., Königsberger, J., Kiefer, C., Silcher, S., & Mitschang, B. (2016). The Data-driven Factory. In Proceedings of the 18th International Conference on Enterprise Information Systems (pp. 40–52). Portugal: SCITEPRESS - Science and Technology Publications, Lda. https://doi.org/10.5220/0005831500400052

IBM Corp. (2018, April 4). IBM Business Process Manager - Overview - United States. Retrieved April 5, 2018, from https://www.ibm.com/us-en/marketplace/business-process-manager

Jablonski, S. (1997). Architektur von Workflow Management Systemen. Informatik-Forschung und Entwicklung, 12(2), 72–81.

Krumeich, J., Jacobi, S., Werth, D., & Loos, P. (2014). Big Data Analytics for Predictive Manufacturing Control - A Case Study from Process Industry. In 2014 IEEE International Congress on Big Data (pp. 530–537). https://doi.org/10.1109/BigData.Congress.2014.83

Krumeich, J., Werth, D., Loos, P., Schimmelpfennig, J., & Jacobi, S. (2014). Advanced planning and control of manufacturing processes in steel industry through big data analytics: Case study and architecture proposal. In 2014 IEEE International Conference on Big Data (Big Data) (pp. 16–24). https://doi.org/10.1109/BigData.2014.7004408

Lee, J., Bagheri, B., & Kao, H.-A. (2015). A Cyber-Physical Systems architecture for Industry 4.0-based manufacturing systems. Manufacturing Letters, 3, 18–23. https://doi.org/10.1016/j.mfglet.2014.12.001

Lu, Y., Morris, K. C., & Frechette, S. P. (2016). Current Standards Landscape for Smart Manufacturing Systems. NIST Interagency/Internal Report (NISTIR) - 8107. Retrieved from https://www.nist.gov/publications/current-standards-landscape-smart-manufacturing-systems

Luo, H., & Fan, Y. (1999). CIMflow: a workflow management system based on integration platform environment. In 1999 7th IEEE International Conference on Emerging Technologies and Factory Automation, Proceedings. ETFA '99 (Vol. 1, pp. 233–241). https://doi.org/10.1109/ETFA.1999.815361

Nagorny, K., Lima-Monteiro, P., Barata, J., & Colombo, A. W. (2017). Big Data Analysis in Smart Manufacturing: A Review. International Journal of Communications, Network and System Sciences, 10(03), 31. https://doi.org/10.4236/ijcns.2017.103003

O'Donovan, P., Leahy, K., Bruton, K., & O'Sullivan, D. T. J. (2015). Big data in manufacturing: a systematic mapping study. Journal of Big Data, 2, 20. https://doi.org/10.1186/s40537-015-0028-x

Pritschow, G., Altintas, Y., Jovane, F., Koren, Y., Mitsuishi, M., Takata, S., … Yamazaki, K. (2001). Open Controller Architecture – Past, Present and Future. CIRP Annals, 50(2), 463–470. https://doi.org/10.1016/S0007-8506(07)62993-X

Rademakers, T. (2012). Activiti in Action: Executable Business Processes in BPMN 2.0. Greenwich, CT, USA: Manning Publications Co.

SAP AG. (2018). SAP Documentation - Business Process Engine. Retrieved April 5, 2018, from https://help.sap.com/saphelp_nw73/helpdata/en/08/67668dfb56447f9d13251fba9b47c7/frameset.htm

Scheer, A.-W. (1987). CIM Computer Integrated Manufacturing: Der computergesteuerte Industriebetrieb (2nd ed.). Berlin Heidelberg: Springer-Verlag. Retrieved from //www.springer.com/de/book/9783642970504

Wirth, R. (2000). CRISP-DM: Towards a standard process model for data mining. In Proceedings of the Fourth International Conference on the Practical Application of Knowledge Discovery and Data Mining (pp. 29–39).

Zhang, Y., Qu, T., Ho, O. K., & Huang, G. Q. (2011). Agent-based Smart Gateway for RFID-enabled real-time wireless manufacturing. International Journal of Production Research, 49(5), 1337–1352. https://doi.org/10.1080/00207543.2010.518743

Zhang, Y., Wang, W., Du, W., Qian, C., & Yang, H. (2018). Coloured Petri net-based active sensing system of real-time and multi-source manufacturing information for smart factory. The International Journal of Advanced Manufacturing Technology, 94(9–12), 3427–3439. https://doi.org/10.1007/s00170-017-0800-5

ZVEI. (2015). Status Report ZVEI Reference Architecture Model. Retrieved from http://www.zvei.org/Downloads/Automation/5305%20Publikation%20GMA%20Status%20Report%20ZVEI%20Reference%20Architecture%20Model.pdf