ECAI'04, 16th European Conference on Artificial Intelligence - Workshop 28: MONET Workshop on Model-Based Systems - 22-27 August 2004, Valencia, Spain - pp. 45-50
A Three Dimensional Framework for Comparing Diagnosis Knowledge-Based Applications

Cecilia Zanni, Marc Le Goc and Claudia Frydman 1

Abstract. This paper proposes a framework to analyse, represent and compare knowledge-based applications for diagnosis. It defines a three-dimensional space in which an application may be represented by a point whose coordinates are defined on three axes corresponding to its conceptual, functional and technical dimensions. The conceptual dimension focuses on the problem solving method, while the functional dimension relates to the way in which causality is represented in the models. Finally, the technical dimension describes the computation techniques used to implement the problem solving method. Describing applications according to this framework makes it easy to observe and analyze the similarities and differences among them. As an application, we position fourteen selected diagnosis applications frequently cited in the AI literature within this framework. As a conclusion, we show that the diversity among diagnosis knowledge-based applications lies mainly in the conceptual dimension, because the other two are strongly correlated.
1 INTRODUCTION
This paper proposes a framework to analyse, represent and compare knowledge-based applications for diagnosis. Its objective is to obtain a single description system that makes it possible to represent every diagnosis application as a global entity, according to a systemic approach. With this goal in mind, every application is described in a three-dimensional space: a conceptual dimension, a functional dimension and a technical dimension. This space is discrete and qualitative: the values on each axis qualify the conceptual, functional and technical properties of an application. Every application is then represented by a point in this space. The similarities and the differences among the applications are observed by identifying the common (similarities) and different (differences) values along each axis. Section 2 of this paper describes the framework and enumerates the applications to be analyzed. Sections 3, 4 and 5 describe each of the dimensions in the framework, while Section 6 shows the positioning of the sample applications in this three-dimensional space. Finally, Section 7 states our conclusions.
1 LSIS UMR CNRS 6168 - Domaine Universitaire de St. Jérôme - 13397 Marseille Cedex 20 - France - Telephone: +33 491 05 60 37 - Fax: +33 491 05 60 33 - email: [email protected], [email protected], [email protected]
2 THE FRAMEWORK
We propose a framework that is inspired by previous works concerning the definition of a unifying framework for describing problem solving methods [7], [1], [33], to which we add a systemic point of view of the modeling task. Systems Theory [16] defines the model of a system as an association of three models: the structural model (also called the ontological model), which contains the knowledge about the components of the system and their organisation in structures; the functional model, which defines the variables and their role in the models; and the behavioral model, which contains the knowledge about the states and the events of the system (and also specifies the time structure when describing a real time system). We integrate these three types of knowledge into our framework, so that a diagnosis application is described within a three-dimensional space:
• The Technical Dimension (associated with the Structural Model), at which we describe the nature of the underlying theory for representing and exploiting the knowledge (technical calculus or computation). This dimension describes the different computation techniques that are used to implement the functions of an application.
• The Functional Dimension (associated with the Functional Model), at which we describe the nature of the models or, in other words, how causality is represented (causality is represented as relations between variable values). The functions represent the relations between one subset of variable values and another.
• The Conceptual Dimension (associated with the Behavioral Model), at which we describe the nature of the knowledge source and the nature of the problem solving method. The conceptual dimension of an application describes the different states of resolution of the problem.
These three dimensions can be materialized by three axes. On each axis, an application is described by a value (a label) that qualifies the application according to the dimension associated with that axis. Consequently, the three dimensions are defined with three sets of values, so that an application is defined by three labels, one for each axis. The set of values used to define an axis is not ordered, but its elements must represent non-overlapping concepts. In order to instantiate our framework, we build the three sets of values from the analysis of a representative set of diagnosis applications [6]. In [25], the authors review the state of the art in knowledge-based diagnosis, and also propose a sample set of applications frequently cited in the AI literature. For our particular application, we choose a subset of that sample, consisting of the following applications: MYCIN [13], [4], HT [29], [30], GDE [12], TEKNOLID [3], [9], MOLE [20], ALEXIP [2], [27], [32], AUSTRAL [17], [26], IXTET [8], MIMIC [11], the off-line operation of GASPAR [5], [15], [21], [22], CA-EN [23], [24], [28], DIAPO [19], the Diagnosis Template in CommonKADS [33] and FAULTY II [31].
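To make this description system concrete, the short Python sketch below (an illustration of ours, not part of any of the cited systems; the class and function names are our own) represents an application as a point labelled on the three axes and compares two applications axis by axis. The coordinates used for MYCIN and GDE follow the positioning given later in Section 6.

# Illustrative sketch: an application as a point in the discrete 3-D space.
# The axis labels are those used later in the paper (LCT/SMCT/ACT, AM/LM/SM/QM,
# and the five inference structures); the Python names are our own.

from typing import NamedTuple

class Application(NamedTuple):
    name: str
    technical: str    # e.g. "LCT", "SMCT", "ACT"
    functional: str   # e.g. "AM", "LM", "SM", "QM"
    conceptual: str   # e.g. "Heuristic", "Predict & Compare", ...

def compare(a: Application, b: Application) -> dict:
    """Return, axis by axis, whether the two applications share the same label."""
    return {
        "technical": a.technical == b.technical,
        "functional": a.functional == b.functional,
        "conceptual": a.conceptual == b.conceptual,
    }

mycin = Application("MYCIN", technical="LCT", functional="AM", conceptual="Heuristic")
gde = Application("GDE", technical="LCT", functional="LM", conceptual="Predict & Compare")
print(compare(mycin, gde))
# {'technical': True, 'functional': False, 'conceptual': False}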
3 THE TECHNICAL DIMENSION

The Technical Dimension is concerned with the calculation techniques used to implement the problem solving methods and thus the reasoning mechanisms. We have identified three main groups of computation techniques:
• "Logical Computation Techniques", which rely on theorem proving capabilities, based for example on modus ponens.
• "State Machine Computation Techniques", which are based on the exploitation of finite state automata and of simulators used for recognition (logical and temporal constraint satisfaction).
• "Algebraic Computation Techniques", which exploit simulators of qualitative behavioral models.

3.1 Logical Computation Techniques

These techniques are used mainly as support for the Heuristic approach to diagnosis, for Consistency Based Diagnosis and for Abductive Reasoning. They include [6]:
• First Order Predicate Calculus
• Non Monotonic Reasoning, Assumption Truth Maintenance Systems (ATMS)
• Expert Systems, Knowledge Based Systems, Case Based Reasoning Systems

3.2 State Machine Computation Techniques

These techniques are used in Recognition Based Diagnosis and with Discrete Models, and include [6]:
• Constraint Satisfaction engines
• State Machines

3.3 Algebraic Computation Techniques

These techniques make it possible to simulate the different possible behaviors of the system in its different behavior modes. They are well suited when alarm generation or interpretation is needed. They are used in Simulation Based Diagnosis and include, among others [6]:
• Qualitative Calculus and Simulation
• Algebraic equations (linear or non-linear), first order linear differential equations or recurrence equations (linear or non-linear)

4 THE FUNCTIONAL DIMENSION

The Functional Dimension relates to the way in which causality is represented in the models and thus to the languages of knowledge representation. This dimension gives us the variety needed to design diagnosis applications at the knowledge level. We have identified four functional models:
• "Associative Models"
• "Logical Models"
• "State Transition Models"
• "Algebraic Models"

4.1 Associative Models

Causality is implicit, in the sense of a pairing between "observed symptoms" and "causes". Only the concept of "associative matching" supports the diagnosis reasoning. As an example of this way of representing causality, let us consider MYCIN rules [18] (Figure 1):

IF the infection is primary-bacteremia
AND the site of the culture is one of the sterile sites
AND the suspected portal of entry is the gastrointestinal tract
THEN there is suggestive evidence (0.7) that infection is bacteroid

Figure 1. Associative matching in MYCIN
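The following Python sketch (ours; the rule encoding and the function name associate are illustrative assumptions, not MYCIN's actual implementation) shows the kind of associative matching such a rule performs: a symptom pattern is paired directly with a cause and a certainty factor, without any explicit causal model.

# Sketch of "associative matching": a rule pairs observed symptoms directly
# with a cause, with no explicit causal model. Names and structure are ours.

RULES = [
    # (set of abstract symptoms, suggested cause, certainty factor)
    ({"primary-bacteremia", "sterile-culture-site", "gastrointestinal-portal"},
     "bacteroid infection", 0.7),
]

def associate(observed_symptoms: set) -> list:
    """Return the causes whose symptom pattern is contained in the observations."""
    return [(cause, cf) for pattern, cause, cf in RULES
            if pattern <= observed_symptoms]

print(associate({"primary-bacteremia", "sterile-culture-site",
                 "gastrointestinal-portal", "fever"}))
# [('bacteroid infection', 0.7)]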
4.2 Logical Models

In this kind of model, "causes" are related to "consequences" via a logical connector (typically, the ⇒ connector), possibly including a temporal modality. An example of Logical Models can be found in [14], regarding the failure models exploited by the DIAPO system for diagnosing the coolant pump sets in EDF nuclear power plants (Figure 2).

Figure 2. Logical correlation in DIAPO ("High D1/D2 ratio" and "Decrease of QFJ1 flow" are symptoms; "Blocking of the pump bearing knee joint", "Breakdown of the pump bearing knee joint" and "Blocking of seal-1" are diagnostic hypotheses; "Shaft vibration" and "Primary water rise" are abstract conditions)

4.3 State Transition Models

Causality is represented by an order relation and a temporal correlation. Let us consider the behavioral model of the components of a telecommunications network in GASPAR [10] as an example of this kind of model (Figure 3): when a component in state 1 receives message mr on port pr at time t, it changes instantly to state 2 and sends message me on port pe at time t+d, where d belongs to the interval [d1, d2].

Figure 3. An order relation representing causality in GASPAR (transition from state 1 to state 2 labelled -(mr,pr), +((me,pe),d1,d2))
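A minimal sketch of such a timed transition, written in Python with identifiers of our own choosing (it is not GASPAR code), could look as follows.

# Sketch of a state transition model: on receiving message mr on port pr, the
# component moves from state1 to state2 and emits me on port pe after a delay
# d in [d1, d2]; the delay is kept as an interval. Identifiers are ours.

TRANSITIONS = {
    # (state, received message, port) -> (next state, emitted message, port, (d1, d2))
    ("state1", "mr", "pr"): ("state2", "me", "pe", (1, 3)),
}

def step(state, message, port, t):
    next_state, emitted, out_port, (d1, d2) = TRANSITIONS[(state, message, port)]
    # The emission time is only constrained to lie in the interval [t+d1, t+d2].
    return next_state, (emitted, out_port, (t + d1, t + d2))

print(step("state1", "mr", "pr", t=10))
# ('state2', ('me', 'pe', (11, 13)))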
4.4 Algebraic Models

Causality is compiled in the form of algebraic expressions. The equation model corresponding to the FEP nozzle system managed by CA-EN [23] is an example of this way of representing causality (Figure 4). The variables are TNH (high pressure shaft speed), TSRNZ (nozzle reference position), TSNZ (nozzle position), TANZ (nozzle stroke) and CMD (servo current), and the equations are:

TSRNZ(k) = TSRNZ(k-1) + (nr(1) + nr(2)) * TNH(k) + nr(3) * TNH(k-1)   (NZ.1)
TANZ(k) = na(1) * TANZ(k-1) + na(2) * (TSRNZ(k) + TSNZ(k)) - na(3) * (TSRNZ(k-1) + TSNZ(k-1)) - na(4)   (NZ.2)
TSNZ(k) = ns(1) * TSNZ(k-1) + ns(2) * TANZ(k-1)   (NZ.3)

Figure 4. Algebraic knowledge used in CA-EN
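The sketch below (ours) simply iterates these recurrences; the coefficient values and the TNH trajectory are invented placeholders, not the real gas-turbine parameters, and at each step the equations are evaluated in dependency order (NZ.1, then NZ.3, then NZ.2, since NZ.2 uses TSNZ(k)).

# Sketch: simulating the CA-EN nozzle recurrences NZ.1-NZ.3 step by step.
# The coefficients nr, na, ns and the TNH input are placeholders chosen only
# to make the example run.

nr = [0.1, 0.2, 0.05]
na = [0.9, 0.3, 0.25, 0.01]
ns = [0.8, 0.15]

def simulate(tnh, tsrnz0=0.0, tanz0=0.0, tsnz0=0.0):
    tsrnz, tanz, tsnz = [tsrnz0], [tanz0], [tsnz0]
    for k in range(1, len(tnh)):
        tsrnz.append(tsrnz[k-1] + (nr[0] + nr[1]) * tnh[k] + nr[2] * tnh[k-1])   # NZ.1
        tsnz.append(ns[0] * tsnz[k-1] + ns[1] * tanz[k-1])                       # NZ.3
        tanz.append(na[0] * tanz[k-1] + na[1] * (tsrnz[k] + tsnz[k])
                    - na[2] * (tsrnz[k-1] + tsnz[k-1]) - na[3])                  # NZ.2
    return tsrnz, tanz, tsnz

print(simulate([100.0, 100.0, 102.0, 101.0]))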
5 THE CONCEPTUAL DIMENSION
As stated earlier, the Conceptual Dimension focuses on the problem solving methods, which are represented in the form of inference structures. From the study of the state of the art in [25] and [6], we deduce five inference structures:
• "Heuristic"
• "Predict & Compare"
• "Observe & Explain"
• "Tracking & Interpretation"
• "Hypothesis Generation & Hypothesis Discrimination"
5.1 The "Heuristic" Inference Structure

Figure 5 shows the "Heuristic" Inference Structure. Every diagnosis carried out according to a heuristic approach rests on a collection of "associative" assertions of the form "symptoms → causes (faults, diseases, etc.)". These assertions are provided by experts and are more or less directly exploited by the diagnosis application. At the conceptual level, this type of diagnosis is based on the matching (heuristic association) between symptoms and causes. As expert knowledge is generally formulated in an abstract way, two dual inferences ("abstract" and "reify") are necessary for the interpretation of the observations and the formulation of solutions adapted to the particular case.

Figure 5. The "Heuristic" Inference Structure (device data are abstracted into abstract symptoms; an associative model supports the "associate" inference from abstract symptoms to abstract causes, which are reified into the diagnosis)

5.2 The "Predict & Compare" Inference Structure

Figure 6 shows the "Predict & Compare" Inference Structure. Consistency based approaches to diagnosis and Discrete Models rely on an explicit model of the correct behavior of the device. It is used to predict the correct behavior of the process from the input data, in order to compare it with the observed behavior. The evidence of a contradiction between the observed and predicted behaviors allows the generation of cause hypotheses ("compare & generate" inference). These hypotheses are evaluated ("evaluate" inference) taking into account the real world (the observed behavior), to retain only the causes that are not contradictory with the observations.

Figure 6. The "Predict & Compare" Inference Structure (correct behavior models support the "predict" inference; the predicted and observed behaviors feed the "compare & generate" inference, and the resulting hypotheses are evaluated against the observed device data to produce the diagnosis)
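As an illustration (ours, and deliberately much simpler than GDE-style conflict-set reasoning), the following Python sketch runs the three inferences on an invented two-adder device: the correct-behavior model predicts the outputs, every contradicted observation generates a hypothesis, and hypotheses later observed to behave correctly are dropped.

# Sketch of "Predict & Compare" on an invented toy device (adder A1 feeding
# adder A2). All names are ours.

def predict(inputs):
    """Correct-behavior model: predict the output of each component."""
    x, y, z = inputs
    a1 = x + y                      # predicted output of component A1
    a2 = a1 + z                     # predicted output of component A2
    return {"A1": a1, "A2": a2}

def compare_and_generate(predicted, observed):
    """'compare & generate': suspect each component whose observation contradicts the prediction."""
    return {c for c, v in observed.items() if predicted[c] != v}

def evaluate(hypotheses, predicted, extra_observations):
    """'evaluate': drop hypotheses whose output is later observed to match the prediction."""
    return {h for h in hypotheses if extra_observations.get(h) != predicted[h]}

predicted = predict((1, 2, 3))                              # {'A1': 3, 'A2': 6}
hypotheses = compare_and_generate(predicted, {"A1": 3, "A2": 7})
print(evaluate(hypotheses, predicted, {}))                  # {'A2'}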
5.3 The "Observe & Explain" Inference Structure
From the point of view of their conceptual dimension, the diagnosis applications which implement abductive reasoning are close to the consistency based applications (see [1] for a detailed analysis). In both cases, a model is used to predict a behavior which is compared with the observed behavior of the device. The difference lies in the nature of the model: abductive reasoning requires a model of both correct and faulty behaviors, whereas only a model of the correct behavior is necessary for consistency based diagnosis. Consequently, the "Observe & Explain" inference structure (Figure 7), which describes an abductive reasoning diagnosis, is similar to the "Predict & Compare" inference structure (Figure 6). The principal difference comes from the behavioral models required by the reasoning mechanism. The "Observe & Explain" inference structure shows that the correct and faulty behavioral models and the observed behavior allow predicting an anomaly ("predict" inference). The comparison between the observed behavior and the predicted anomaly allows the generation of initial hypotheses ("compare & generate" inference), which are evaluated ("evaluate" inference) against observations carried out on the device. The "diagnosis" corresponds to the validated hypotheses. The "Predict & Compare" and the "Observe & Explain" inference structures also differ when considering the knowledge for controlling the problem solving process, that is to say in the way the behavior models are enumerated for the prediction, which corresponds to the task layer in CommonKADS. It is the only case where such a description could be required in order to emphasize the difference between the two inference structures [1].
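The sketch below (ours; the failure modes are invented placeholders on the same toy adder device as above) contrasts with the previous one: here fault models predict anomalies, and a fault is retained as a hypothesis only when its predicted anomaly actually explains the observation.

# Sketch of "Observe & Explain": fault (failure-mode) models predict anomalies,
# and a fault is retained when its predicted anomaly matches the observation.
# The fault models below are invented placeholders.

FAULT_MODELS = {
    "A2 stuck-at-0": lambda inputs: {"A2": 0},
    "A2 off-by-one": lambda inputs: {"A2": inputs[0] + inputs[1] + inputs[2] + 1},
}

def observe_and_explain(inputs, observed):
    hypotheses = []
    for fault, model in FAULT_MODELS.items():
        predicted_anomaly = model(inputs)                      # "predict" inference
        if all(observed.get(k) == v for k, v in predicted_anomaly.items()):
            hypotheses.append(fault)                           # "compare & generate" inference
    return hypotheses

print(observe_and_explain((1, 2, 3), {"A1": 3, "A2": 7}))
# ['A2 off-by-one']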
Figure 7. The "Observe & Explain" Inference Structure (correct and failure behavior models and the observed behavior support the "predict" inference; the predicted anomaly and the observed behavior feed the "compare & generate" inference, and the resulting hypotheses are evaluated against the observed device data to produce the diagnosis)

Figure 8. The "Tracking & Interpretation" Inference Structure (fault models support the "generate hypothesis" inference, fault behavior models are simulated, and the predicted behavior is matched against the behavior observed on the device to produce the diagnosis)
5.4 The "Tracking & Interpretation" Inference Structure
In this approach, the physical system to be diagnosed is represented by a dynamic qualitative model. Two tasks maintain the model. The tracking task advances the state of the model in step with observations from the physical system. The diagnosis task, upon identifying a particular fault, injects that fault into the current model so that the predictions of the model continue to agree with actual observations. Tracking and diagnosis are accomplished by a "hypothesize and match" cycle that has three main steps, as shown in Figure 8 ("Tracking & Interpretation" Inference Structure) [11]:
1. The "generate hypothesis" inference: observations from the physical system may evoke fault hypotheses via a kind of decision tree. The fault hypotheses are in the form of specific failure modes (such as a stuck pressure regulator or an abnormal set point) and are ordered by likelihood.
2. The "simulate" inference: each new fault model is first initialized from the observations that evoked its construction, thus establishing the initial state of the model. The model is then simulated incrementally as observations change, predicting the immediate successor states.
3. The "matching" inference: a similarity function computes the similarity between the observations and a state of the model. The comparison is based on both qualitative and quantitative values. If the similarity is above a threshold, the model is retained as a plausible reflection of the physical system's condition; below the threshold, the model is discarded.
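A minimal Python rendering of this cycle (ours; the failure modes, the similarity function and the threshold are invented for illustration) could look as follows.

# Sketch of the hypothesize-simulate-match cycle: fault hypotheses are ranked,
# each fault model is simulated one step, and models whose predictions are
# similar enough to the observation are retained. All models are invented.

FAULT_HYPOTHESES = [                       # ordered by likelihood ("generate hypothesis")
    ("regulator stuck", lambda state: state),         # pressure stops changing
    ("set point drift", lambda state: state + 2.0),   # pressure drifts upward
]

def similarity(predicted, observed):
    return 1.0 / (1.0 + abs(predicted - observed))

def track(previous_obs, current_obs, threshold=0.5):
    retained = []
    for name, model in FAULT_HYPOTHESES:
        predicted = model(previous_obs)                        # "simulate" inference
        if similarity(predicted, current_obs) >= threshold:    # "matching" inference
            retained.append(name)
    return retained

print(track(previous_obs=10.0, current_obs=10.1))
# ['regulator stuck']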
5.5 The "Hypothesis Generation & Hypothesis Discrimination" Inference Structure
Any diagnosis carried out according to the Task Oriented approach (the "Hypothesis Generation & Hypothesis Discrimination" inference structure) aims at modelling the behavior of problem solving in terms of the knowledge which is employed for the problem solving itself. The diagnosis consists of two sub-tasks ([1], [31], [33]), shown in Figure 9:
1. The "generate hypothesis" inference, which searches backward through a causal network to find potential causes of the observed behavior. This inference is executed until no more hypotheses can be found.
2. The "discriminate hypothesis" inference, which tests candidates in the set of hypotheses obtained by the "generate hypothesis" inference through an observable entity that may be used to evaluate the candidates and also to rule out other hypotheses.
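The following sketch (ours; the causal network and the observable tests are invented) illustrates the two sub-tasks: backward search through a causal network to generate hypotheses, then discrimination using the available observable tests.

# Sketch of "Hypothesis Generation & Hypothesis Discrimination". The causal
# network, hypotheses and tests are invented placeholders.

CAUSES = {                                  # effect -> possible direct causes
    "no coolant flow": ["pump failure", "valve closed"],
    "pump failure": ["motor burnt", "power loss"],
}

def generate_hypotheses(observation):
    """Backward search: every (transitive) cause of the observation is a hypothesis."""
    frontier, hypotheses = [observation], []
    while frontier:
        effect = frontier.pop()
        for cause in CAUSES.get(effect, []):
            hypotheses.append(cause)
            frontier.append(cause)           # keep searching until no more causes
    return hypotheses

def discriminate(hypotheses, observable_tests):
    """Rule out every hypothesis contradicted by an available observable test."""
    return [h for h in hypotheses if observable_tests.get(h, True)]

hyps = generate_hypotheses("no coolant flow")
print(discriminate(hyps, {"valve closed": False, "power loss": False}))
# ['pump failure', 'motor burnt']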
Figure 9. The "Hypothesis Generation & Hypothesis Discrimination" Inference Structure (a model supports the "generate hypothesis" inference; the hypotheses are discriminated against the observed behavior to produce the diagnosis)

6 POSITIONING IN THE THREE-DIMENSIONAL SPACE
Figure 10 shows the positioning of the applications in the three-dimensional space defined in Section 2. It should be noted that FAULTY-II and the Diagnosis Template in CommonKADS may be mapped onto every functional model and onto every computation technique, because they are purely conceptual applications. The projection of Figure 10 onto each of the three planes leads to Figures 11, 12 and 13. The projection onto the TD-FD plane reveals only four categories of applications, compared with the seven categories in Figure 10. This implies that diversity is low: applications cannot be differentiated properly when described only from the technical and functional points of view ("LCT" can, for instance, be used to implement both "AM" and "LM"). On the other hand, Figures 12 and 13 reveal the same number of categories, or clusters of applications, as found in Figure 10. The axis common to Figures 10, 12 and 13 is the Conceptual Dimension.
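These projections can be reproduced from the coordinates read off the figures, as in the following sketch (ours; the purely conceptual FAULTY-II and CommonKADS Diagnosis Template are left out, which is why six rather than seven categories are counted on the planes involving the conceptual axis).

# Sketch: counting the categories obtained when the positions given in the
# figures are projected onto each plane.

POSITIONS = {                             # (Technical, Functional, Conceptual)
    "MYCIN/ALEXIP":        ("LCT",  "AM", "Heuristic"),
    "AUSTRAL/IXTET":       ("SMCT", "SM", "Heuristic"),
    "HT/GDE":              ("LCT",  "LM", "Predict & Compare"),
    "GASPAR (off line)":   ("SMCT", "SM", "Predict & Compare"),
    "MOLE/DIAPO/TEKNOLID": ("LCT",  "LM", "Observe & Explain"),
    "CA-EN/MIMIC":         ("ACT",  "QM", "Tracking & Interpretation"),
}

def categories(axes):
    """Number of distinct points after projecting onto the given pair of axes."""
    return len({(p[axes[0]], p[axes[1]]) for p in POSITIONS.values()})

print(categories((0, 1)))   # TD-FD plane: 4 categories
print(categories((2, 0)))   # CD-TD plane: 6 categories
print(categories((2, 1)))   # CD-FD plane: 6 categories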
Figure 10. Positioning of the applications in the three-dimensional space (Technical Dimension: LCT, SMCT, ACT; Functional Dimension: AM, LM, SM, QM; Conceptual Dimension: Heuristic, Predict & Compare, Observe & Explain, Tracking & Interpretation, HG & HD)

Figure 11. Positioning of the applications on the TD-FD plane

Figure 13. Positioning of the applications on the CD-TD plane
It may be seen, then, that the conceptual dimension is the most discriminating: it is from this point of view that the systems differ the most. The result that the conceptual dimension allows the best discrimination is not really surprising: several applications sharing a modelling/simulation methodology cannot be discriminated until that methodology is combined with a problem solving method. From the positioning of a sample set of these applications in this framework, we observe six classes or clusters of applications, easily identified as follows:
1. (MYCIN, ALEXIP)-like applications
2. (AUSTRAL, IXTET)-like applications
3. (HT, GDE)-like applications
4. (off-line GASPAR)-like applications
5. (MOLE, DIAPO, TEKNOLID)-like applications
6. (CA-EN, MIMIC)-like applications
There is an extra cluster, formed by the so-called Task-Oriented Applications (the Diagnosis Template in CommonKADS and FAULTY-II), which is not directly related to the others, as these applications only exist at the conceptual level. These approaches to diagnosis derive from a general definition of a diagnostic problem in a top-down perspective ([1]), which is compatible with the approaches used in the studied applications. Inversely, [7] uses a bottom-up perspective when inducing "Task Structures" from an implementation point of view. But efforts must still be made in order to connect these two analytical methodologies ([33]). Another interesting question relates to the empty space within the three-dimensional grid. For example, is there an application that could be defined by (SMCT, SM, Tracking & Interpretation)? These issues require further investigation.
Figure 12. Positioning of the applications on the CD-FD plane
7 CONCLUSIONS
This paper proposes a representation framework for classifying intelligent diagnosis applications. This framework relies on a three-dimensional space, where each dimension describes different characteristics of the applications: the conceptual, the functional and the technical ones. The functional and the technical dimensions are strongly correlated. The conceptual dimension is the most effective for discriminating the applications, because it offers a wider variety of values. If we want this framework to be really operational and discriminating, we should remove the correlation between the functional and the technical dimensions. We are currently working on a modified definition of the framework, removing the technical dimension and replacing it by a phenomenological one, in which we will describe the nature of the phenomena to be diagnosed.
REFERENCES
[1] Bredeweg B., 'Model-based diagnosis and prediction', in J. Breuker and W. Van de Velde (Eds.), The CommonKADS Library for Expertise Modelling, IOS Press, 121-153, (1994).
[2] Cauvin S. and Braunschweig B., 'Graphical knowledge representation in the ALEXIP system for petrochemical process supervision', Application of Artificial Intelligence in Engineering, Vol. 2, 219-233, (1993).
[3] Alonso Gonzalez Carlos, Pulido Junquera Belarmino and Gerardo Acosta, 'On line industrial diagnosis: an attempt to apply artificial intelligence techniques to process control', Springer Lecture Notes in Artificial Intelligence, 1415, 804-813, (1998).
[4] Buchanan B.G. and Shortliffe E.H., Rule-Based Expert Systems. The MYCIN Experiments of the Stanford Heuristic Programming Project, Addison Wesley, Reading, Massachusetts, 1984.
[5] Bibas S., Cordier M.-O., Dague P., Dousson C., Lévy F. and Rozé L., 'Scenario generation for telecommunications networks', IJCAI Workshop on AI in Distributed Intelligent Networks, (1995).
[6] Zanni Cecilia, Proposition of a Conceptual Framework for the Analysis, Classification and Choice of Knowledge Based Diagnosis Systems - PhD Thesis, Université d'Aix-Marseille III, Marseille, France, 2004.
[7] B. Chandrasekaran and T. R. Johnson, 'Generic tasks and task structures: History, critique and new directions', in J.-M. David, J.-P. Krivine and R. Simmons (Eds.), Second Generation Expert Systems, Springer-Verlag, 232-272, (1993).
[8] Dousson Christophe, 'Alarm driven supervision for telecommunication network: II - on-line chronicle recognition', Annales des Télécommunications, tome 51, no. 9-10, 501-508, (1996).
[9] Acosta Lazo G.G., Alonso Gonzalez C.J. and Pulido Junquera B., 'Diagnosis basada en conocimiento de un proceso azucarero con TEKNOLID' [Knowledge-based diagnosis of a sugar process with TEKNOLID], International Sugar Journal, Vol. 103, Issue 1225, 44-51, (January 2001).
[10] M.-O. Cordier and C. Dousson, 'Alarm driven monitoring based on chronicles', SafeProcess 2000, 286-291, (2000).
[11] Dvorak D. and Kuipers B., 'Model based monitoring of dynamic systems', 11th International Joint Conference on Artificial Intelligence, 1238-1243, (1989).
[12] de Kleer J. and Williams B., 'Diagnosing multiple faults', Artificial Intelligence, 32, 97-130, (1987).
[13] Shortliffe E., Computer-Based Medical Consultations: MYCIN, American Elsevier, (1976).
[14] Cauvin S. et al., 'Monitoring and alarm interpretation in industrial environments', AI Communications, 11(3-4), 139-173, (1998).
[15] Bibas S., Cordier M.-O., Dague P., Dousson C., Lévy F. and Rozé L., 'Alarm driven supervision for telecommunications networks: I - offline scenario generation', Annales des Télécommunications, tome 51, no. 9-10, 493-500, (1996).
[16] Le Moigne J.-L., La théorie du système général - Théorie de la modélisation [The theory of the general system - Theory of modelling], Presses Universitaires de France, 1984.
[17] Bredillet P., Delouis I., Eyrolles P., Jehl O., Krivine J.-P. and Thiault P., 'The AUSTRAL expert system for power restoration on distribution systems', ISAP-94, 295-302, (1994).
[18] Laurière Jean-Louis, 'Représentation et utilisation des connaissances - première partie : les systèmes experts' [Knowledge representation and use - part one: expert systems], Technique et Science Informatiques, Vol. 1, No. 1, 25-42, (1982).
[19] Porcheron M., Ricard B., Busquet J.L. and Parent P., 'DIAPO: A case study in applying advanced AI techniques to the diagnosis of a complex system', European Conference on Artificial Intelligence (ECAI), (1994).
[20] Eshelman L., 'MOLE: A knowledge-acquisition tool for cover-and-differentiate systems', Automating Knowledge Acquisition for Expert Systems, 37-80, (1988).
[21] Rozé L., 'Supervision of telecommunications networks: A diagnoser approach', International Workshop on Principles of Diagnosis (DX'97), 103-111, (1997).
[22] Rozé L. and Cordier M.-O., 'Diagnosing discrete-event systems: an experiment in telecommunications networks', 4th International Workshop on Discrete Event Systems, 130-137, (1998).
[23] Travé-Massuyès L. and Milne R., 'Diagnosis of dynamic systems based on explicit and implicit behavioural models: An application to gas turbines in ESPRIT project TIGER', Applied Artificial Intelligence Journal, 10(3), (1996).
[24] Travé-Massuyès L. and Milne R., 'TIGER: Gas turbines condition monitoring using qualitative model based diagnosis', IEEE Expert Intelligent Systems and Their Applications, Vol. 12, No. 3, 22-31, (1997).
[25] Zanni C., Le Goc M. and Frydman C., 'Diagnosis problem solving in SACHEM', CIMCA'03, (2003).
[26] Cordier M.-O., Thiebaux S., Jehl O. and Krivine J.-P., 'Supply restoration in power distribution systems: a reference problem in diagnosis and reconfiguration', 8th International Workshop on Principles of Diagnosis (DX'97), 286-291, (1997).
[27] Cauvin S., Braunschweig B., Galtier P. and Glaize Y., 'Model-based diagnosis for continuous process supervision: The ALEXIP experience', Engineering Applications of Artificial Intelligence, Vol. 6, No. 4, 333-343, (1993).
[28] Travé-Massuyès L., Escobet T., Pons R. and Tornil S., 'The CA-EN diagnosis system and its automatic modeling method', Computación y Sistemas Journal, Vol. 5, No. 2, 128-143, (2001).
[29] Davis Randall, 'Reasoning from first principles in electronic troubleshooting', International Journal of Man-Machine Studies, 19, 403-423, (1983).
[30] Davis Randall, 'Diagnostic reasoning based on structure and behavior', Artificial Intelligence, 24, 347-410, (1984).
[31] Benjamins Richard, Problem Solving Methods for Diagnosis - PhD Thesis, University of Amsterdam, Amsterdam, Netherlands, 1993.
[32] Cauvin S., 'Action plans dynamic application in the ALEXIP knowledge-based system', 2nd Workshop on Computer Software Structures Integrating AI/KBS Systems in Process Control, 147-152, (1994).
[33] Schreiber G., Akkermans H., Anjewierden A., de Hoog R., Shadbolt N., Van de Velde W. and Wielinga B., Knowledge Engineering and Management - The CommonKADS Methodology, MIT Press, 2000.