Making Requirements Specifications Accessible via Logic, Language and Graphics: A Progress Report
Jeremy Pitt and Jim Cunningham
Department of Computing, Imperial College of Science, Technology and Medicine, 180 Queen's Gate, London SW7 2BZ, UK.
Email: [email protected]. URL: http://medlar.doc.ic.ac.uk/jvp/
Abstract
Natural language software tools may have an important role in making requirements specifications more accessible. Possible tools include text processors to support requirements elicitation, and text generators to support requirements validation. The current paper reports on our progress in developing a natural language generation system, integrating this tool with a graphical interface and an automated reasoning system, and applying it in the domain of requirements validation. The resulting synthesis of logic, language and graphics is an important first step in developing an intelligent assistant to support a designer in both requirements elicitation and validation.
1 Introduction
The requirements specification for any system, written in some combination of diagrams and natural language, may be difficult for a designer or manager to reason about simply because of its complexity. An equivalent specification, written in a formal language, can also cause problems: it may be difficult to create, and for those untrained in the formal language it may still be difficult, if not impossible, to read and understand. Furthermore, there may be interactions between separate parts of the specification which produce results that are not only inconsistent with some original intention, but are also not apparent until the specification is `evaluated' in some way. Even then, the symbolic results of this evaluation may not be easily understood. There is scope, therefore, for making a formal specification more `accessible' in two senses: by making it both easier to create and easier to animate. This is a potentially important application for natural language software tools, for example text processors to support requirements elicitation, and text generators to support requirements validation.

In [PCK94], we introduced a system which provided cooperative answers to natural language email queries. This system used a text processor to convert queries into logical form, which were then evaluated against a knowledge base. The answers, still logical statements, were then translated back into natural language by a discourse generator. In this paper, we report on our progress in using the discourse generator for another application: requirements validation. This tool is integrated with a graphical interface and another automated reasoning program, to produce an interactive system which can assist a designer with requirements validation. The general technique we are using for requirements validation is a form of symbolic execution called logical animation. In section 2, we discuss the idea of logical animation, as originally presented in [CCB90], and the advantages of the logical calculus we are now using to implement this idea. In the following section, section 3, we briefly describe our text generation tool and its application, in conjunction with an interactive graphical interface, in supporting logical animation. Throughout, we illustrate the ideas by reference to the blocks world [GN87].
(This work is supported by CEC Esprit Project GOAL (Esprit 6283) and CEC Esprit BRA Medlar II (Esprit 6471).)
However, the blocks world is a simple, exemplary scenario, and in section 4.2 we consider how we intend to support reasoning about a specification in a more complex scenario, a robotic factory. This scenario is the focus of a case study in the last year of the CEC Medlar II Project (Esprit 6471). Our overall intention is to use the discourse processor for requirements elicitation, so that our two natural language software tools would work in tandem as illustrated in figure 1.
Figure 1: Logic, Language & Graphics in Requirements Engineering. (The diagram relates the user, the discourse processor, lexical translations, the informal (natural language) and formal requirements specifications, graphical representations, the graphics interface, the KE model builder and the discourse generator to the activities of requirements elicitation, logical animation and requirements validation.)
Although we have a working discourse understanding system (as described in [PCK94]), its application to requirements elicitation is less advanced than our application of the discourse generation system to requirements validation. In section 5 we outline our approach to the former activity, and draw some further conclusions.
2 Logical Animation
The motivation for logical animation is to deduce and display a logical model of a formal requirements specification [CCB90]; in this paper, we will use as our working example a specification of the blocks world presented as a logical theory of Modal Action Logic [Mai87]. One deduction method for finding a logical model of such a specification is a modal form of semantic tableaux [Fit90], which is especially suitable for automation and for providing interaction with the user. The original system used the `standard' tableau rules; however, for the following reasons our implementation uses the free variable KE calculus [DM94, PC95]:

Efficiency: KE is different from the tableau method, although the KE proof procedure is related to tableaux with lemmata plus a certain formula selection strategy. However, KE does offer a substantial (space) improvement, because the only branching rule decreases the size of the search space.

Model building: an essential feature of KE is that all complete open branches are mutually inconsistent. This facilitates our search for minimal models in comparison with the tableau method, which can lead to a redundant enumeration of models (i.e. some of the models may be subsumed by others). This redundancy may not be visible in small examples, but can have a dramatic effect on the enumeration process when lengthier sets are involved.

Behaviour explanation: the standard tableau method is essentially a `consistency checker'.
Sorts:      BLOCK
Actions:    Unstack: BLOCK × BLOCK
            Stack: BLOCK × BLOCK
Agents:     user
Predicates: OnTable: BLOCK
            On: BLOCK × BLOCK
            Clear: BLOCK
Axioms:
  1.  ∀x. OnTable(x) ⇒ ∀y. ¬On(x,y)
  2.  ∀x. OnTable(x) ∨ ∃y. On(x,y)
  3.  ∀x. Clear(x) ⇒ ∀y. ¬On(y,x)
  4.  ∀x. Clear(x) ∨ ∃y. On(y,x)
  5.  ∀x∀y. On(x,y) ∧ Clear(x) ⇒ per(user, Unstack(x,y))
  6.  ∀x∀y. ¬On(x,y) ⇒ ¬per(user, Unstack(x,y))
  7.  ∀x∀y. OnTable(x) ∧ Clear(x) ∧ Clear(y) ∧ x ≠ y ⇒ per(user, Stack(x,y))
  8.  ∀x∀y. ¬Clear(x) ⇒ ¬per(user, Stack(y,x))
  9.  ∀x∀y. per(user, Unstack(x,y)) ⇒ [user, Unstack(x,y)] OnTable(x)
  10. ∀x∀y. per(user, Stack(x,y)) ⇒ [user, Stack(x,y)] On(x,y)

Figure 2: Modal Action Logic specification for the Blocks World
That is, given a set of formulas Γ (of an axiomatic theory) and a single formula φ, it implements a function f such that f(Γ, φ) = yes if φ is consistent with Γ, and no otherwise. The problem with the standard tableau method is that not all the rules used to implement f can be easily related to `traditional' forms of reasoning (Swartout [Swa83] reports a similar problem using a theorem prover based on resolution). For example, the branching rule for implication derives its justification from the truth-tables for implication in classical logic (only), but it is hard to explain to those untrained in formal logic, and hard to motivate its use in building a model. The explanation of a proof involving such steps requires additional effort (e.g. by translating a tableau proof into natural deduction steps). KE, on the other hand, uses as its analytic rules for implication precisely those rules which correspond to `practical' forms of reasoning, such as modus ponens and modus tollens. Even the single branching rule of KE is related to the fundamental logical notion of classical bivalence, i.e. that every formula is either true or false. All of these rules are intuitive, easily justified, and, it is expected, will be easier to explain as reasoning steps in animating a specification and explaining its behaviour.

The KE Model Builder we have implemented (formal details are included as an Appendix) breaks down each formula in the database (i.e. the specification) in order to build a KE-tree in which each branch is an alternative way of satisfying the database, i.e. a logical model in the Herbrand universe. Having done this, the intention of logical animation is to provide the user with detailed information about the actual state of the system. In practice, this means listing all positive extensions of the predicates that can be derived from the (intensional and extensional) database; semantically, we are trying to build a minimal model [Hin88] using the KE proof procedure. When the deductive process stops, the condition of the KE-tree can be one of the following: each branch is closed by an inconsistent pair A and ¬A, in which case the original database is inconsistent; there are alternative complete open branches, which may be disregarded either because they do not introduce new information, or because they introduce a non-minimal model, or because they add incompleteness; or there is only one complete open branch, which represents the actual state of a model for the intensional database.
Clearly, it is only the latter case which provides the user with information about the actual state of the system being animated, so we restrict our attention to minimal models and disregard branches which introduce incompleteness. To give an example, consider the Modal Action Logic specification of the blocks world presented in figure 2. Given only the first four axioms and the fact ∀x.OnTable(x) (i.e. the initial state, in which all the blocks are on the table), the output from the KE Model Builder is the KE-tree shown in figure 3, in which the only open branch contains the positive literals OnTable(X4) and Clear(X7), where X4 and X7 are free variables, indicating that all the blocks are clear.
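The step of collecting the positive literals on the single complete open branch can be pictured with the following Prolog fragment. This is a minimal sketch, not code from the KE Model Builder itself; the predicate names and the neg/1 representation of negative literals are ours, and a branch is represented simply as a list of literals:

    % A branch is a list of literals; negative literals are wrapped in neg/1.
    complementary(neg(A), A).
    complementary(A, neg(A)).

    % open_branch(+Branch): the branch contains no complementary pair of literals.
    open_branch(Branch) :-
        \+ ( member(L1, Branch), member(L2, Branch), complementary(L1, L2) ).

    % minimal_model(+Branch, -Model): the positive literals of a complete open branch.
    minimal_model(Branch, Model) :-
        open_branch(Branch),
        findall(L, ( member(L, Branch), L \= neg(_) ), Model).

For instance, the query minimal_model([onTable(red), clear(red), neg(on(red,blue))], M) yields M = [onTable(red), clear(red)].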
Figure 3: KE-tree for Blocks World in the initial state

However, if we perform a stack action, e.g. to stack the green block on the blue block, so that the postcondition On(green,blue) is true, a KE-tree can be developed as illustrated in figure 4.
          axioms 1-4
fact      On(green,blue)
1         OnTable(X1) ⇒ ¬On(X1,Y1)          Rule G, axiom 1 (twice)
2         OnTable(X2) ∨ On(X2,sk0(X2))      Rule G, Rule D, axiom 2
3         Clear(X3) ⇒ ¬On(Y3,X3)            Rule G, axiom 3 (twice)
4         Clear(X4) ∨ On(sk1(Y4),X4)        Rule G, Rule D, axiom 4
5         ¬OnTable(green)                   Rule B, fact and 1
6         ¬Clear(blue)                      Rule B, fact and 3
Figure 4: Development of KE-tree after stack(green,blue)

At this point no further rules apply and the branch will not close, so we apply a frame rule, by which we carry over to the new state those aspects of the state of the system which are unchanged by the action. Here, we add all the (positive) facts which hold in the current state and are consistent with the facts (i.e. the positive and negative literals) on the branch. The KE Model Builder then continues making derivations, and if it now stops with an open branch, the positive literals on that branch can be collected, and this (minimal) model now represents the new state. In this case we add the facts that the red block is clear and on the table, the green block is clear, and the blue block is on the table. Derivation continues and an open branch is obtained, containing these same facts.
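The frame rule itself is simple to picture. The following Prolog fragment is a minimal sketch of the idea only (the predicate names, the neg/1 representation of negative literals, and the list representation of states and branches are ours, not the actual implementation): every positive fact of the previous state is carried over unless its complement, or the fact itself, already occurs on the branch.

    % frame_rule(+OldState, +Branch, -NewBranch)
    % OldState:  positive literals holding before the action.
    % Branch:    positive and negative literals derived so far on the open branch.
    % NewBranch: Branch extended with every old fact whose negation does not
    %            already occur on the branch (and which is not already present).
    frame_rule(OldState, Branch, NewBranch) :-
        findall(Fact,
                ( member(Fact, OldState),
                  \+ member(neg(Fact), Branch),
                  \+ member(Fact, Branch)
                ),
                Carried),
        append(Branch, Carried, NewBranch).

In the stack(green,blue) example, OnTable(green) is not carried over because ¬OnTable(green) is on the branch (line 5 of figure 4), and Clear(blue) is blocked by ¬Clear(blue) (line 6), while the facts about the red block, together with Clear(green) and OnTable(blue), are carried over, as described above.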
3 Discourse Generation
There is evidence from cognitive science [LS87] that the optimal way of conveying maximal information is visual; hence we look for a graphical presentation of our logical model. However, our experience with the tool of [CCB90] was that, with complicated displays, looking for the effects of changes became rather like a game of `spot the difference', with the added complication that the previous display was no longer available for comparison. We therefore resolved to use natural language to draw attention to details.

Our text generation system consists of four modules: a parser, an inference system, the generator, and a post-processor. These modules are organized serially, and operate as follows. The first module parses logical formulas and converts them into a Prolog structure to be used by the inference system in the second module. The second module transforms the semantic structure output from the first module into a syntactic structure based on a traditional grammar of English [QGLS85]. To do this the inference system exploits a variety of heuristics: for example, actions are treated as dynamic verbs, predicates as prepositions or adjectives, determiners are chosen according to the quantifiers and where in the logical formula the quantified variable is introduced, and so on. The third module is the surface generator, which is a Prolog logic grammar run `in reverse' to produce the actual English sentences. The final module is a post-processor, which analyses the sentences and improves the quality of the output.

In processing a formal specification, the first two modules do additional work by constructing all the lexicons necessary for `talking about' that specification in English: some interaction with the user may be required at this point, e.g. to translate predicates like OnTable. When the system is initialized, the axioms of the specification are processed by these two modules. This enables an extensional database, which is a description of the system's initial state, to be converted into English after being processed by the entire system. Subsequent calls to the system are made with a set of (positive and negated) logical formulas which describe the change of state resulting from a particular action. These formulas pass through all four modules, and a natural language description is produced at the end.
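To make the idea of a logic grammar run `in reverse' concrete, here is a deliberately tiny DCG sketch in Prolog. It is far simpler than the grammar actually used, and the clause and term names are invented for illustration; it treats the predicate On as a preposition and Clear as an adjective, in the spirit of the heuristics mentioned above:

    % sentence(+Sem): words realizing a simple predication over blocks.
    sentence(on(X, Y))   --> np(X), [is, on], np(Y).
    sentence(onTable(X)) --> np(X), [is, on, the, table].
    sentence(clear(X))   --> np(X), [is, clear].

    % noun phrase for a named block: "the <colour> block"
    np(Block) --> [the, Block, block].

The query phrase(sentence(on(green, blue)), Words) then gives Words = [the, green, block, is, on, the, blue, block], i.e. `the green block is on the blue block'.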
At this point we should stress that this natural language generation system was designed as a demonstration of feasibility only. Clearly, it is very limited in comparison with state-of-the-art systems (for a review, see [HZ93]), in that we have had to make many simplifying assumptions, restrict the domain of application, and resort at times to efficacious heuristics rather than general principles. However, the system is a useful stepping stone towards a more powerful system which exploits graphical dialogues and multimedia generation.

Graphical dialogues were used in the CEC Esprit acord project (see [LKM89]), in which natural language was combined with graphics in the interface to a knowledge based system. The flow of information between the graphics system and the dialogue manager fulfilled two functions: firstly, it set up and maintained a semantic mapping whereby pictures were interpretable; secondly, it caused pictures whose interpretation was understood to appear and change, and could register (and react to) a user interaction with respect to a picture. Using this system, a user could construct a graphical representation of the objects in his or her system. For example, in the acord demonstrator these included trucks, cars, cities, routes, and loads. The user could define icons, and accompany this with a deictic instruction to interpret such-and-such an icon as a certain object. The user could also relate a natural language description to a graphical event, for example by issuing a natural language command such as "interpret this action as `the car goes to the city'", and accompanying this command with a graphical action, e.g. clicking on a car icon and dragging it to a city. The system could then use these descriptions to describe any graphical changes it makes during logical animation.

This process of system initialization via graphical dialogues is also a good source of input for multimedia generation, i.e. the generation of natural language together with graphics. A good example of such a system is comet [MEF+90], which was designed to generate explanations for system maintenance and repair. This system highlighted the requirement for extra underlying knowledge resources in order to relate the graphics to the text, the need for different components each contributing to the overall output, and the need for a sophisticated method for coordinating the interactions of these components to produce coherent output.
Considerable research and development effort lies between our current prototype and realizing a system that incorporates the elements described above, but our first experiments and prototype natural language generation system have demonstrated three general points:

the use of natural language generation to supplement graphical representations, calling attention to specific details, is important and necessary. We would intuitively expect this: art gallery guides, for example, do not explain details of paintings merely by blowing up those details; the details are instead described in natural language;

traditional grammar can be applied to generating as well as to processing natural language. Previously [PCK94], we used traditional grammar in our discourse processing system; the generator shows that the same principles can be useful for structuring text for generation, at least at the sentential level;

the use of logic as the basis for generation is highly promising. It may well be that an adequate knowledge representation system for generating English is closer to the predicate calculus than to other formalisms, such as semantic networks, which were invented to capture the meaning of natural language. In fact, it is our contention that the representation of natural language semantics is best conducted through the medium of multi-modal logic, which is being actively researched in the Medlar II Project. Multi-modal logics, we argue, are sufficiently rich and expressive to represent the full range of meanings of events, actions, and behaviour required for our applications, and, in addition to the usual formal advantages of a well-defined semantics, completeness and consistency, these logics are powerful enough for direct symbolic execution, as in logical animation.
4 Demonstrators
4.1 The Blocks World
A prototype system has been implemented in order to develop and evaluate the ideas in a simple test domain, i.e. the blocks world. The prototype implementation integrates three components: the KE Model Builder, the Natural Language Generator, and a Graphics Interface, which allows menu-driven user interaction. The operation of the system follows two phases: initialization and animation. The sequence of processes performed by each component of the system, and in response to user interactions, is as follows:

Initialization:
the KE Model Builder applies the tableau rules to the specification plus a given extensional database to create the initial state of the system;
the Natural Language Generator processes the specification to construct the lexicons, and describes the world model in English;
the Graphics Interface draws the world model, computes from the specification the permissible actions in this state, and makes these available via a menu (the Action Menu).

Animation (repeat until the user selects Quit; a sketch of this loop is given below):
the user selects an action from the Action Menu;
the KE Model Builder starts a new tableau with the axioms of the specification and the postconditions of the action; when no further KE rules apply on an open branch, the frame rule is invoked, and the KE rules are re-applied until a minimal model is constructed;
the Natural Language Generator computes the changes from the previous state to the new state represented by the minimal model, and English descriptions of these changes are generated and displayed;
the Graphics Interface updates the graphical representation and determines the permissible actions in the new state; the Action Menu is updated dynamically and the user can now select a new action.
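The animation phase can be pictured as the following Prolog-style loop; the predicate names (select_action/2, postconditions/2, build_model/4, describe_changes/2, update_display/1) are placeholders standing for the components just described, not the actual interfaces of the prototype:

    % animate(+Spec, +State): repeat until the user selects quit.
    animate(Spec, State) :-
        select_action(State, Action),                 % user picks from the Action Menu
        (   Action == quit
        ->  true
        ;   postconditions(Action, Post),             % postconditions of the chosen action
            build_model(Spec, Post, State, NewState), % KE rules + frame rule, to a new minimal model
            describe_changes(State, NewState),        % generate and display English descriptions
            update_display(NewState),                 % redraw graphics, recompute permissible actions
            animate(Spec, NewState)
        ).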
As discussed above, the blocks world in its initial state is as displayed and described in the left-hand screen dump in figure 5. After selecting the action Stack green on blue, the new situation is as shown, and the changes are as described, in the right-hand screen dump. (For those watching in black and white, the red block is on the left, the green one in the middle, and the blue one on the right.)
Figure 5: Interactive Graphical Displays for the Blocks World
4.2 The `Car Painting Robots'
To give a more realistic picture of a potential application, we briefly review a case study that is the focus for work in the last year of the CEC Medlar II project, and is the test environment for the tools described in this paper. Consider, then, the way that model building could be used in designing a `factory of the future'. Figure 6 shows a robotic car painting workplace with various spatial constraints. Initially there are a large number of cars in one workspace (E1), each to be painted a selected colour by one of the robots. A robot is only permitted to paint a car in a place which is free of neighbouring objects, so cars must be moved accordingly. E4 is another space for storing cars, but eventually all cars must be painted and returned to E1.

As sketched, there are a myriad of unresolved questions which we will not go into, save to indicate the design space of solutions. It is clear that special geometric reasoning is needed, but the major open questions concern the capability of the robots. This in turn determines the extent to which knowledge and planning is distributed, and the structure of the communication system. Even without autonomous robots there is a considerable problem in finding some partial ordering of tasks which will eventually achieve the goal situation. Having autonomous agents makes the conception of a plan more difficult, but does not necessarily complicate the computation of a solution, because with many agents eager to work the search space for the tasks of each individual agent can be much smaller. Indeed, there are indications that more agents can make the synthesis task more, not less, tractable.

Now consider what would happen if we had specified the logical behaviour of each robot, with or without a global planning agent. Whatever our specification, we need to gain confidence in it. One way is to prove some theorems about it; this is traditional deduction. We would like to prove that the goal is eventually achieved, that the robots do not crash, respray cars, or do anything else we would not like (see [AC91] for an indication that this sort of proof is not fantasy). Unfortunately we cannot think of everything, and nor will the robots. We could gain more confidence and validate the whole system by simulating it and interacting with the simulation. Such a simulation would be a restricted form of closure computation, in which we circumscribe the class of consequences which we admit, and for which we can use the model building techniques described above. Moreover, if we want to interact with the robots, and get them to explain themselves and justify why they are doing certain actions in a certain order, then some form of two-way natural language communication is highly desirable.
Figure 6: A Robot Car Painting Factory (spaces E1, E2, E3 and E4)
5 Further Work and Conclusions
5.1 Requirements Elicitation
The general idea of using discourse understanding for requirements elicitation was motivated by the tell system [HSE87], in which an English specification was translated one sentence at a time into a formula of temporal logic, each sentence being treated as a separate semantic unit. However, representing the `meaning' of individual English sentences as isolated expressions of a logic put heavy restrictions on the English syntax, and limited the specifier's `freedom of expression'. Simple direct translation into a collection of logical axioms could not reflect the thematic structure of, or the interaction between discontiguous parts of, the English specification.

In an attempt to overcome these problems, we have been considering the application of our discourse understanding system [PCK94] to an informal requirements specification written in natural language. The intention is not to produce a formal requirements specification automatically, but to provide automated `intelligent assistance' for the system designer. This Intelligent Assistant would include a discourse understanding system, and would enable the designer to investigate the components and the behaviour of selected parts of the system piece by piece (cf. [Cox94]). The analysis would therefore give feedback for establishing and subsequently validating the formal specification, by providing useful information relating to: initial identification of agents, actions and system axioms; interactive construction of the graphical representation and lexical paraphrase of system predicates, for use in the graphics interface and text generation programs used in validation (cf. [LKM89]); and early indication of inconsistencies via `incremental compilation', whereby not only are axioms identified but their effect can be tested by constructing a partial model of the system.

There are, though, still great lacunae, in both theory and practice, between what we can handle and what we must be able to handle. Consider the following extract from the description of the blocks world in [GN87]:
... there are just three blocks. Each block can be somewhere on the table or on top of exactly one other block. Each block can have at most one other block immediately on top of it. ... for all blocks x [and] y ... If x is on y and x is clear, then, after an unstack operation, x is on the table and y is clear ...
Ideally, the first sentence would be recognized as a definition of sorts, the second and third as axioms, and the fourth as an action, which an `intelligent' assistant could `animate' by giving appropriate arbitrary values to x and y and building a model. However, many issues are immediately raised in trying to process the discourse above automatically, even before any attempt to extract the sorts and axioms and animate them (for example, that in this context `clear' means `having no blocks on'; the vacuous meanings of `just' and `somewhere'; and so on). The situation is exacerbated when we consider less tightly constrained, more `real-life' examples, and a working solution may well lie in the direction of `controlled language' (i.e. disallowing such vague expressions) rather than ever more sophisticated text processors.
5.2 Final Comments
In this paper, we have described methods for model building with free variable KE, logical animation based on model building, and discourse generation used not just for stating the results of reasoning but, in conjunction with graphical descriptions, also to help explain the results of reasoning (a major challenge that we are currently investigating is how to explain the proofs that led to these results). This research has illustrated the ideas that: natural language software tools, plus graphical interfaces, used for both input and output (cf. [LKM89, RK94]), can help to alleviate complexity, and can be used to make requirements specifications more accessible; and that the synthesis of logic, language and graphics is important for enabling interactive `intelligent systems' to reason about change, to explain changes, and to show the effects of change. Though we still have some way to go in scaling up and integrating our prototypes, we believe this combination of Artificial Intelligence technologies can accompany the static illustration of specifications (as in, for example, a recent application of virtual reality) to enable a more dynamic interaction to take place between a system designer and the specification, via an `intelligent assistant'.
References

[AC91] W. Atkinson and J. Cunningham. Proving Properties of a Safety-Critical System. IEE Software Engineering Journal, 6(2):41-50, 1991.

[CCB90] M. C. Costa, R. J. Cunningham, and J. Booth. Logical Animation. In Proc. of 12th International Conference on Software Engineering, pages 144-149, Nice, 1990. IEEE Computer Society Press.

[Cox94] B. Cox. Communicating conceptual integrity in distributed systems through intelligent assistance. Omega, 22(2):113-122, 1994.

[DM94] M. D'Agostino and M. Mondadori. The Taming of the Cut. Journal of Logic and Computation, 4:285-319, 1994.

[Fit90] M. Fitting. First-Order Logic and Automated Theorem Proving. Springer-Verlag, 1990.

[GN87] M. Genesereth and N. Nilsson. Logical Foundations of Artificial Intelligence. Morgan Kaufmann, 1987.

[Hin88] J. Hintikka. Model Minimization - An Alternative to Circumscription. Journal of Automated Reasoning, 4, 1988.

[HSE87] H. Horai, M. Saeki, and H. Enomoto. Specification-based software development system pure tell. Research Report 73, Fujitsu Ltd, International Institute for Advanced Study of Information Science, 1987.

[HZ93] H. Horacek and M. Zock, editors. New Concepts in Natural Language Generation. Pinter, 1993.

[LKM89] J. Lee, B. Kemp, and T. Manz. Knowledge-based graphical dialogue: A strategy and architecture. In Esprit-89 Conference Proceedings. Kluwer, 1989.

[LS87] J. Larkin and H. Simon. Why a diagram is (sometimes) worth 10,000 words. Cognitive Science, 11:65-99, 1987.

[Mai87] T. Maibaum. A Logic for the Formal Requirements Specification of Real-Time Embedded Systems. Alvey Forest Project Deliverable R3, Department of Computing, Imperial College, 1987.

[MEF+90] K. McKeown, M. Elhadad, Y. Fukumoto, J. Lim, C. Lombardi, J. Robin, and F. Smadja. Natural language generation in COMET. In R. Dale, C. Mellish, and M. Zock, editors, Current Research in Natural Language Generation, pages 103-138. Academic Press, 1990.

[PC95] J. Pitt and J. Cunningham. Theorem proving and model building with the calculus KE. To appear in the Bulletin of the IGPL, 1995.

[PCK94] J. Pitt, J. Cunningham, and J. H. Kim. Cooperative answering to natural language email queries. In F. Anger, R. Rodriguez, and M. Ali, editors, Proceedings 7th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, pages 273-281. Gordon and Breach Science Publishers, 1994.

[QGLS85] R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. A Comprehensive Grammar of the English Language. Longman, 1985.

[RK94] R. Rajagopalan and B. Kuipers. The figure understander: A system for integrating text and diagram input to a knowledge base. In F. Anger, R. Rodriguez, and M. Ali, editors, Proceedings 7th International Conference on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems, pages 273-281. Gordon and Breach Science Publishers, 1994.

[Swa83] W. Swartout. The GIST Behavior Explainer. In Proceedings of AAAI-83, pages 402-407, Washington, 1983. Kaufmann.
Appendix: KE Specification
A formal specification of the KE rules used in our implementation is given in figure 7. A branch of a KE-tree is represented by a 4-tuple (Λ, Σ, Γ, n), where Λ is the set of unanalysed formulas and formulas that have been analysed x times, Σ is the set of analysed formulas, Γ is the set of formulas that have been analysed x + 1 times, and n is the number of further times that the γ rule can be applied on the branch. Something akin to Smullyan's unifying notation is used to represent formulas more compactly:
    α             α1      α2             β             β1      β2
    a1 ∧ a2       a1      a2             ¬(b1 ∧ b2)    b1^c    b2^c
    ¬(a1 ⇒ a2)    a1      a2^c           b1 ⇒ b2       b1^c    b2
    ¬(a1 ∨ a2)    a1^c    a2^c           b1 ∨ b2       b1      b2
    ¬¬a           a

    ε             ε11     ε12     ε21     ε22
    e1 ⇔ e2       e1      e2      e1^c    e2^c
    ¬(e1 ⇔ e2)    e1      e2^c    e1^c    e2

where F^c denotes the complement of the formula F. In Rules A, B, Bs and E the following schemata are then used, where i ∈ {1, 2} and if j = 1 then k = 2 and vice versa:

    Schema A:  from α infer α1 and α2
    Schema B:  from β and βj^c infer βk
    Schema Bs: from β and βi infer nothing new (β is already satisfied)
    Schema E:  from ε and εij infer εik

The proof procedure used in the KE Model Builder is shown in figure 8.
Move Literal:   (Λ ∪ {l}, Σ, Γ, n)  ⟹  (Λ, Σ ∪ {l}, Γ, n)

Closure:        (Λ, Σ ∪ {l, l^c}, Γ, n)  ⟹  the branch is closed

Rule A:         (Λ ∪ {α}, Σ, Γ, n)  ⟹  (Λ ∪ {α1, α2}, Σ ∪ {α}, Γ, n)

Rule Bs:        (Λ ∪ {β}, Σ, Γ, n)  ⟹  (Λ, Σ ∪ {β}, Γ, n)
                if βi ∈ Λ ∪ Σ ∪ Γ

Rule B:         (Λ ∪ {β}, Σ, Γ, n)  ⟹  (Λ ∪ {βk}, Σ ∪ {β}, Γ, n)
                if βj^c ∈ Λ ∪ Σ ∪ Γ

Rule E:         (Λ ∪ {ε}, Σ, Γ, n)  ⟹  (Λ ∪ {εik}, Σ ∪ {ε}, Γ, n)
                if εij ∈ Λ ∪ Σ ∪ Γ

Rule D:         (Λ ∪ {∃x.φ}, Σ, Γ, n)  ⟹  (Λ ∪ {φ[x/sk_i(X_j)]}, Σ ∪ {∃x.φ}, Γ, n)
                for sk_i a new Skolem symbol and X_j the free variables in φ

Rule G:         (Λ ∪ {∀x.φ}, Σ, Γ, n)  ⟹  (Λ ∪ {φ[x/X_i]}, Σ, Γ ∪ {∀x.φ}, n − 1)
                if n > 0, and for X_i a new free variable

PB:             (Λ, Σ, Γ, n)  ⟹  (Λ ∪ {βi}, Σ, Γ, n)  and  (Λ ∪ {βi^c}, Σ, Γ, n)  (two branches)
                if β ∈ Λ or ε ∈ Λ (and in the latter case let βi = εij)

Restart Gamma:  (Λ, Σ, Γ, n)  ⟹  (Λ ∪ Γ, Σ, ∅, n)
                if n > 0 and there is no formula remaining in Λ

Figure 7: Formal Specification of free variable KE
1   if one of the rules Move Literal, A, Bs, B, E, D or G applies to the first formula in Λ, apply it;
1.1   if Closure applies, end with close;
1.2   otherwise repeat from 1;
2   if there is a β or ε formula in Λ, apply PB;
2.1   process both branches (i.e. start from 1 with each branch);
3   if n > 0, apply Restart Gamma and repeat from 1;
4   if n = 0, end with open.

Figure 8: Proof Procedure for the KE Model Builder
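As a companion to figures 7 and 8, the following is a minimal sketch in Prolog of the propositional core of such a procedure (Rules A, B and Bs, the branching rule PB, and Closure on complementary literals). It omits the quantifier rules D, G and Restart Gamma, Rule E, and the Λ/Σ/Γ bookkeeping and minimal-model search of the actual KE Model Builder; the formula syntax (and/2, or/2, imp/2, neg/1) and the predicate names are ours, for illustration only:

    % Smullyan-style classification of compound formulas (propositional case).
    alpha(and(A,B), A, B).
    alpha(neg(or(A,B)), neg(A), neg(B)).
    alpha(neg(imp(A,B)), A, neg(B)).

    beta(or(A,B), A, B).
    beta(imp(A,B), neg(A), B).
    beta(neg(and(A,B)), neg(A), neg(B)).

    complement(neg(A), A) :- !.
    complement(A, neg(A)).

    % ke(+ToDo, +Lits, -Model): expand the formulas in ToDo over a branch whose
    % literals so far are Lits; Model is the literal set of a complete open branch.
    ke([], Lits, Lits).
    ke([F|Rest], Lits, Model) :-
        (   F = neg(neg(G))
        ->  ke([G|Rest], Lits, Model)                         % double negation
        ;   alpha(F, A1, A2)
        ->  ke([A1, A2|Rest], Lits, Model)                    % Rule A
        ;   beta(F, B1, B2)
        ->  (   complement(B1, C1), member(C1, Lits)
            ->  ke([B2|Rest], Lits, Model)                    % Rule B
            ;   complement(B2, C2), member(C2, Lits)
            ->  ke([B1|Rest], Lits, Model)                    % Rule B
            ;   ( member(B1, Lits) ; member(B2, Lits) )
            ->  ke(Rest, Lits, Model)                         % Rule Bs: already satisfied
            ;   complement(B1, C1),
                (   ke([B1|Rest], Lits, Model)                % PB, left branch: B1 holds
                ;   ke([C1, B2|Rest], Lits, Model)            % PB, right branch: B1 fails, Rule B gives B2
                )
            )
        ;   % F is a literal
            complement(F, C),
            (   member(C, Lits)
            ->  fail                                          % Closure: complementary pair on the branch
            ;   member(F, Lits)
            ->  ke(Rest, Lits, Model)
            ;   ke(Rest, [F|Lits], Model)
            )
        ).

For example, the query ke([imp(p,q), p], [], M) closes the branch in which neg(p) is assumed and returns the single open branch M = [q, p]; as in KE proper, the two branches created by PB are mutually inconsistent on the formula branched upon.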