Incorporating Partial Deduction in Structural Synthesis of Programs
Technical Report, 1994
John Krogstie and Mihhail Matskin
Knowledge Systems Group, Faculty of Electrical Engineering and Computer Science,
The Norwegian Institute of Technology, University of Trondheim, Norway
This research is supported in part by the ESPRIT BRA COMPUNET/NFR contract # 469.92/011 and by the Human Capital and Mobility NFR contract # 101341/410.
Department of Computer Systems and Telematics, The Norwegian Institute of Technology, Trondheim, Norway
email: [email protected], [email protected]
Abstract
We introduce in this paper the notion of partial deduction in the framework of structural synthesis of programs (SSP). The basic principles of SSP and of partial deduction for logic programs are described, and based on this, partial deduction for unconditional computability statements in SSP is defined. Completeness and correctness of partial deduction in the framework of structural synthesis of programs are proved. In addition, several tactics and stopping criteria are suggested, and a general algorithm for partial deduction in SSP is outlined.
1 Introduction

Whether you are a user of commercial information systems or a software developer, the need to dramatically improve the quality and productivity of software development has become a competitive necessity. Many technologies and techniques are vying for the status of silver bullet [5]. Among these are object orientation, reuse, formal specifications and software factories. It is likely that each one will contribute to improving development practice, but none of them alone will answer the essential difficulties in software development. Another candidate, which somehow facilitates all the others, is the area called knowledge-based software engineering (KBSE) [17], where methods from AI are integrated with those from software engineering. Briefly stated, KBSE technology applies knowledge-based representation and processing techniques to the development of conventional software. According to Johnson [8], KBSE has the following advantages:

- It helps decompose complex development processes into small, understandable steps that can be automated.
- It provides a way to define and enforce approved processes.
On leave of absence from the Institute of Cybernetics of the Estonian Academy of Science, Tallinn.
- It represents specifications in a form that is independent of the execution architecture, increasing the potential for reuse.
- It helps accumulate knowledge in a highly reusable form, so it is well suited for software factories.
- It provides a vehicle for continually improving our understanding of complex transformations, thereby increasing the automation of software processes over time.
- It complements the object-oriented paradigm by ensuring that holistic systems requirements are met and that system properties fulfill nonfunctional requirements.

Taken together, these capabilities imply a software development environment that can substantially advance productivity and quality. In the last 10 years the KBSE research community has come up with solutions to many difficult problems. The resulting "point" tools for software re-engineering and for building CASE products are now in use. Meanwhile, research continues on how to integrate these point capabilities into a comprehensive, full life-cycle environment, to fulfill the scenario outlined by Lowry for systems engineering in the next century [16]. The trend towards open CASE environments will allow the software community to adopt KBSE in phases, and the emergence of commercial object-management systems will hopefully make the vision of a complete, integrated KBSE environment a reality.

In this paper, we present two "point" technologies: structural synthesis of programs (SSP), developed by Tyugu [24, 25], and partial deduction (PD), originally developed by Komorowski [11, 13], and investigate how the notion of PD can be applied in the framework of SSP. Since SSP is propositional whereas PD is defined for logic programming, the definitions and results from partial deduction cannot be automatically transferred to SSP. We will still give a presentation of PD including the parts specific to logic programs, since our set of definitions will be made in a similar spirit.
1.1 Outline of the paper

Section 2 contains an introduction to SSP. An introduction to PD is given in Section 3. Those familiar with these techniques may skip these sections and start directly with Section 4, where the main results from applying the notion of PD in the SSP framework can be found. Algorithms and tactics for the synthesis and possible stopping criteria are considered and illustrated with examples. Some concluding remarks are given in Section 5, together with some pointers to possible further work.
2 Introduction to Structural Synthesis of Programs (SSP)

The following introduction is based on [19, 25]. The main idea of SSP is that a proof of solvability of a problem can be built, and that the overall structure of a program can be derived from the proof, knowing very little about the actual properties of the functions used in the program.
2.1 Logical language (LL)

To prove theorems, the language LL is used. It has three kinds of formulae:

1. Propositional variables: A, B, C, ... A propositional variable corresponds to some variable of the source problem, and it expresses the fact that a value of this variable can be computed. A propositional variable will also be termed an atom.

2. Unconditional computability statements:

      A1, ..., Ak → B

   (A and A1,k will be used as shorthands for A1, ..., Ak.) This expresses the computability of a value of the variable b corresponding to B from values of a1, ..., ak corresponding to A. A will be termed the body of the statement, whereas B is termed the head. Thus body(A → B) = A and head(A → B) = B.

3. Conditional computability statements:

      (A1 → B1), ..., (An → Bn) → (C → D)

   ((A → B) will be used as a shorthand for (A1 → B1), ..., (An → Bn).) A conditional computability statement such as (A → B) → (C → D) expresses the computability of d from c depending on the computation of b from a. (A → B) will be termed the body of the statement, whereas (C → D) will be termed the head. The functions head and body are defined as above; moreover, body(body((A → B) → (C → D))) = A etc. All four combinations of body and head are defined in this manner.

The intuitionistic completeness of LL has been proven [21]; thus LL is equivalent to the intuitionistic propositional calculus. Besides that, the formulae are simple enough to allow an efficient search for the proofs needed for automatic program synthesis.
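To make the two kinds of computability statements concrete, the following Python sketch encodes them as plain data. The class and field names are our own illustration, not part of SSP or NUT:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Unconditional:
    """A1, ..., Ak -> B: a value of B is computable from values of A1..Ak."""
    body: Tuple[str, ...]   # the atoms A1..Ak
    head: str               # the atom B

@dataclass(frozen=True)
class Conditional:
    """(A -> B) -> (C -> D): D is computable from C, given a way to get B from A."""
    subgoal: Unconditional  # the premise A -> B
    result: Unconditional   # the conclusion C -> D

# The inverter relation used later in the paper, inv: InINV -> OutINV:
inv = Unconditional(body=("InINV",), head="OutINV")
assert inv.body == ("InINV",) and inv.head == "OutINV"
```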
2.2 Structural Synthesis Rules (SSR)

A sequent notation is used for the derivation rules. A sequent Γ ⊢ X, where Γ is a list of formulae and X is a formula, means that the formula X is derivable from the formulae appearing in Γ. The following synthesis rules are defined:

1. (→ -)

      ⊢ A → V;   Γ ⊢ A
      -----------------
           Γ ⊢ V

   Here Γ ⊢ A is a shorthand for Γ1 ⊢ A1; ...; Γn ⊢ An.
2. (→ +)

      Γ, A ⊢ B
      ----------
      Γ ⊢ A → B
3. (→ - -)

      ⊢ (A → B) → (C → V);   Γ, A ⊢ B;   Σ ⊢ C
      -----------------------------------------
                     Γ, Σ ⊢ V

   Here Γ, A ⊢ B is a set of sequents, one for each (A → B).
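The (→ -) rule alone already yields a simple solvability check over unconditional statements: an atom becomes derivable once every atom in the body of some statement with that head is derivable. The following is a minimal sketch in our own encoding; the actual SSP prover also handles conditional statements:

```python
# Each statement is a pair (body, head); assumptions are the initially
# computable atoms. Saturate by forward application of (-> -).

def derivable(statements, assumptions, goal):
    known = set(assumptions)
    changed = True
    while changed:
        changed = False
        for body, head in statements:
            if head not in known and set(body) <= known:
                known.add(head)
                changed = True
    return goal in known

rules = [(("A",), "B"), (("B", "C"), "D")]
assert derivable(rules, {"A", "C"}, "D")       # D reachable from A and C
assert not derivable(rules, {"A"}, "D")        # C missing, so D is not
```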
2.3 Program extraction

We write A →f B, where f is a term representing the function which computes b from a1, ..., an. The formula A →f B is equivalent to

   ∀x̄ (A(x̄) → B(f(x̄)))

Analogously, we write (A →g B) → (C →F(g,c) D). This formula is equivalent to

   ∀g ((∀x A(x) → B(g(x))) → ∀y (C(y) → D(F(g, y))))

The resulting language is termed LL1. The inference rules (SSR1), after syntactical amendments, read:

1. (→ -)

      ⊢ A(a) →f V;   Γ ⊢ A(a)
      ------------------------
            Γ ⊢ V(f(a))

2. (→ +)

      Γ, A(a) ⊢ B(b)
      ----------------
      Γ ⊢ A →λa.b B

3. (→ - -)

      ⊢ (A →g B) → (C →F(g,c) V);   Γ, A(a) ⊢ B(b);   Σ ⊢ C(c)
      ----------------------------------------------------------
                        Γ, Σ ⊢ V(F(λa.b, c))

This gives as result a functional expression in a typed functional language PL using λ-notation [3]. It is also possible to specify traditional control structures in the language using functions of higher-order type.
2.4 Specification level

Whereas the above specifies the internal language for structural synthesis, higher-level languages are defined for expressing problem specifications. One of these, the NUT language [26], contains six predefined classes:

   bool, numeric, text, array, program, any

An example which will be used later in the article is the definition of a logical inverter, and-port and nand-port. This example illustrates the basic constructs of the language.
ELEMENT
   var Delay: numeric;

An element is an object with a numeric attribute.

INVERTER
   super ELEMENT;
   vir InINV, OutINV: bool;
   rel inv: InINV -> OutINV {OutINV := not InINV}

Inverter is a subclass of element, having two virtual boolean primitive type specifiers InINV and OutINV. Virtual components can be used in computations, but their values are not contained in the value of the object. The relation inv from InINV to OutINV is specified in the curly brackets.
AND
   super ELEMENT;
   vir InAND1, InAND2, OutAND: bool;
   rel and: InAND1, InAND2 -> OutAND {OutAND := InAND1 and InAND2}

NAND
   super INVERTER;
   super AND;
   vir InNAND1, InNAND2, OutNAND: bool;
   rel InAND1 = InNAND1;
       InAND2 = InNAND2;
       OutNAND = OutINV;
       InINV = OutAND;
       nand: InNAND1, InNAND2 -> OutNAND {specification}
The specification in the last relation indicates that a function for the nand-relation has not yet been developed. We will return to this later in the paper.

In addition to this, production rules constitute a meta level for manipulation of classes. The production system is not part of the program synthesizer, but it can be invoked during synthesis. Production rules are of the form

   A1, ..., An → A0; N0

where Ai, i = 0, ..., n, are atomic formulae consisting of a predicate symbol and a list of parameters. These meta-specifications are used for managing evolution of applications and expressing general constraints on integrity and consistency of an application.
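For illustration, the function one would expect the synthesizer to eventually extract for the nand relation can be written by hand as the composition of the and and inv relations through the equalities InINV = OutAND and OutNAND = OutINV. This is a hand-coded sketch, not NUT output:

```python
def inv(x: bool) -> bool:                # INVERTER: OutINV := not InINV
    return not x

def and_port(x: bool, y: bool) -> bool:  # AND: OutAND := InAND1 and InAND2
    return x and y

def nand(in1: bool, in2: bool) -> bool:  # NAND: InINV = OutAND, OutNAND = OutINV
    return inv(and_port(in1, in2))

assert nand(True, True) is False
assert nand(True, False) is True
```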
3 Introduction to Partial Deduction (PD)

The following overview of partial deduction is primarily based on [12, 13]. The reader is assumed to have a basic knowledge of logic programming [22]. Partial deduction is the principle of partial evaluation [10] used in the realm of logic programming. In the large, the goal of partial evaluation is to construct, when given a program and some form of restriction on its usage, such as the knowledge of some, but not all, of its input parameter values, a better new or "residual" program that is equivalent to the original program when used according to the restrictions. The improvement has most often been in effectiveness, even if this is not the only possible kind of improvement. Partial evaluation is thus a form of program transformation, with emphasis on purely automatic methods [4]. The transformations are correctness-preserving wrt (with respect to) the restrictions, in contrast to certain transformation systems used within requirements engineering, where an incomplete specification is transformed into a more complete specification by adding additional meaning to it [9]. Thus, before applying partial evaluation, a correct specification has somehow been developed using other techniques.

Partial deduction can be informally described as follows. Let A, B1, B2, C be first-order atomic formulae and θ a substitution. Consider two clauses A ← B1 and B2 ← C. Let θ = mgu(B1, B2) (thus B1θ = B2θ). Then the partial deduction of A ← B1 and B2 ← C is the residual clause (A ← C)θ.

A standard technique for defining partial deduction is to base it on the proof-theoretic semantics using SLD-resolution [15].
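The informal PD step above can be sketched directly: unify B1 and B2 and apply the resulting substitution to A ← C. The following Python sketch uses a toy term representation (tuples for compound terms, '?'-prefixed strings for variables) and omits the occurs check; it is an illustration, not a production unifier:

```python
def mgu(s, t, theta=None):
    """Most general unifier of terms s and t (no occurs check)."""
    theta = dict(theta or {})
    def walk(u):                         # follow variable bindings
        while isinstance(u, str) and u.startswith('?') and u in theta:
            u = theta[u]
        return u
    stack = [(s, t)]
    while stack:
        a, b = map(walk, stack.pop())
        if a == b:
            continue
        if isinstance(a, str) and a.startswith('?'):
            theta[a] = b
        elif isinstance(b, str) and b.startswith('?'):
            theta[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) \
                and a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None                  # functor clash: not unifiable
    return theta

def subst(term, theta):
    if isinstance(term, str) and term.startswith('?'):
        return subst(theta[term], theta) if term in theta else term
    if isinstance(term, tuple):
        return (term[0],) + tuple(subst(u, theta) for u in term[1:])
    return term

def pd_step(c1, c2):
    """c1 = (A, B1), c2 = (B2, C): return the residual (A <- C)theta, or None."""
    (a, b1), (b2, c) = c1, c2
    theta = mgu(b1, b2)
    return None if theta is None else (subst(a, theta), subst(c, theta))

# p(?x) <- q(?x)  and  q(f(?y)) <- r(?y)  give  p(f(?y)) <- r(?y)
head, body = pd_step((('p', '?x'), ('q', '?x')), (('q', ('f', '?y')), ('r', '?y')))
assert head == ('p', ('f', '?y')) and body == ('r', '?y')
```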
Definition 1 (SLD-derivation and tree) Let P be a normal program and G a normal goal. An SLD-derivation (respectively, SLD-tree) of P ∪ {G} is an SLDNF-derivation (SLDNF-tree) of P ∪ {G} such that only positive literals are selected.

Definition 2 (Resultant) A resultant is a formula

   A1, ..., Am ← B1, ..., Bn

where m > 0, n ≥ 0, and A1, ..., Am and B1, ..., Bn are atoms.

Definition 3 (SLD-derivation of a resultant) Let C be a fixed computation rule. An SLD-derivation of a resultant R0 is a finite or infinite sequence of resultants

   R0 ⇒C R1 ⇒C R2 ⇒C ...

where for each i, Ri is of the form

   A1, ..., Am ← B1, ..., Bi-1, Bi, Bi+1, ..., Bn

and Ri+1 (if any) is of the form

   (A1, ..., Am ← B1, ..., Bi-1, C1, ..., Cj, Bi+1, ..., Bn)θ

if

1. the C-selected atom in Ri is Bi,
2. there is a standardized-apart program clause H ← C1, ..., Cj,
3. θ = mgu(Bi, H), if it exists.
Definition 4 (Partial SLD-trees) The (partial) SLD-tree of a resultant R0 under C is a rooted tree such that:

1. its root is R0, and
2. each node Ri is either a leaf or its children are {Ri+1 | Ri ⇒C Ri+1}.

Definition 5 (Partial deduction of an atom) A partial deduction of an atom A in a program P is the set of all non-failing leaves of an SLD-tree of ← A. A failing leaf is a selected atom which does not unify with any clause of the program.
The assumption that A is an atom can be relaxed to allow a finite set of atoms, A = {A1, ..., Am}; a partial deduction of A in P is then the union of the partial deductions of {A1, ..., Am} in P. These partial deductions are called residual predicates.
Definition 6 (Partial deduction of a program) A partial deduction of a program P wrt A is a program P′ (also called a residual program) obtained from P by replacing the set of clauses in P whose heads contain one of the predicate symbols appearing in A with a partial deduction of A in P.

A condition restricting arbitrary partial deductions to those which intuitively are "specialisations" has to be imposed to guarantee completeness and, for the case of normal programs, even soundness of partial deduction.
Definition 7 (Closedness) Let S be a set of first-order formulae and A a finite set of atoms. S is called A-closed if each atom in S containing a predicate symbol occurring in an atom in A is an instance of an atom in A.

Informally, a residual program for a program P wrt a goal G can be obtained as follows:

- The derivation tree for the goal G is constructed. The derivation is stopped when either
  - the goal succeeds,
  - the goal fails, or
  - the derivation is deliberately stopped by the user.
  The derivation can be stopped at any non-empty goal. The decision when to stop the derivation process is one of the main problems in partial deduction.
- The resultant is constructed from each succeeding or non-failed derivation: the head of the resultant is the substituted goal Gθ and the body consists of the unresolved goals at the end of the derivation.
Intuitively, we can say that the residual program P′ for a program P wrt a goal G is a partial proof of the goal; in order to prove the goal, what remains to be proved are the bodies of the resultants.

For the procedural semantics, partial deduction is correct and complete. These concepts are defined as follows. Let P be a definite program, G a definite goal, and P′ a partial deduction of P wrt G.
Definition 8 (Correctness) P ∪ {G} has an SLD-refutation with computed answer θ if P′ ∪ {G} does.

Completeness is the converse:

Definition 9 (Completeness) P′ ∪ {G} has an SLD-refutation with computed answer θ if P ∪ {G} does.

The partial deduction theorem is formulated using the closedness condition.
Theorem 1 Let P be a program, G a goal, A a finite set of atoms, and P′ a partial deduction of P wrt A (using SLD-trees). Then:

i) P′ is correct wrt P and G.
ii) If P′ ∪ {G} is A-closed, then P′ is complete wrt P and G.

Comment: A-closedness of P′ ∪ {G} means that a derivation of G from P′ is A-closed.
3.1 Partial Deduction tactics

Partial deduction is a generalization of the unfold transformation as defined by Burstall and Darlington [6]. Unfolding can informally be explained as follows. Consider the following grammatical rules:

1. S ::= A L D
2. A ::= U N F O
3. F ::= U N F

Unfolding rule 2 in the right-hand side of rule 1 results in

4. S ::= U N F O L D

Thus A in rule 1 is replaced by U N F O using rule 2. Controlling the depth of unfolding is one of the main problems of partial deduction. To address this, Komorowski defines the opening tactic, a conservative approach to unfolding.
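The grammar example can be mechanized in a few lines; the rule representation is our own:

```python
def unfold(rule, definition):
    """rule, definition: (lhs, rhs-list). Replace the first occurrence of
    definition's lhs in rule's rhs with definition's rhs."""
    lhs, rhs = rule
    d_lhs, d_rhs = definition
    i = rhs.index(d_lhs)                     # position of the nonterminal
    return (lhs, rhs[:i] + d_rhs + rhs[i+1:])

rule1 = ('S', ['A', 'L', 'D'])
rule2 = ('A', ['U', 'N', 'F', 'O'])
assert unfold(rule1, rule2) == ('S', ['U', 'N', 'F', 'O', 'L', 'D'])
```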
Definition 10 (Opening of predicates) Let A be a finite, independent set of atoms, G a definite goal, and P a definite program where P = P1 ∪ S for some disjoint P1 and S, where S is a set of definitions of some predicate symbols. We call P1 a source and S an axiomatization. Let P′ be a partial deduction of P wrt A such that P′ ∪ {G} is A-closed, where the predicates of the atoms selected by the computation rule are the predicates of the heads of the clauses in S. Such a partial deduction is called an opening of the predicates S in P wrt A.

Informally, the purpose of opening is to identify where partial deduction steps should take place. This is in contrast to other approaches to partial deduction that advocate processing of entire programs rather than their fragments. Partial deduction as defined does not include any folding-like transformations. Folding the above result (4) with rule 3 would produce

5. S ::= F O L D

The abbreviation tactic defined by Komorowski is a generalization of folding adding unification-based instantiation.
Definition 11 (Abbreviating) Let P be a program, A a finite, independent set of atoms, H ← L1, ..., Lk a (standardized apart) clause of P, and A ← L′1, ..., L′k, B1, ..., Bn a resultant in an A-closed SLDNF-derivation of P ∪ {← A}. Finally, let θ be a substitution such that L′i = Liθ for i = 1, ..., k, with Hθ not unifiable with any other head of a clause of P. If opening Hθ in A ← Hθ, B1, ..., Bn produces a variant of A ← L′1, ..., L′k, B1, ..., Bn, then A ← Hθ, B1, ..., Bn is the result of abbreviating L′1, ..., L′k with Hθ and is called an abbreviated resultant.

Partial deduction can be correctly extended with abbreviating steps in derivations due to the following corollary.

Corollary An abbreviated resultant is entailed by P if and only if the corresponding non-abbreviated resultant is; that is, the resultant A ← L′1, ..., L′k, B1, ..., Bn is entailed by P if and only if the resultant A ← Hθ, B1, ..., Bn is.
3.2 Stopping criteria

In PAL, a system to support partial deduction of logic programs, four different stopping criteria for the opening tactic are defined [2]:

- Stepwise, in which the user is queried before each opening whether he wants to open the current predicate or not.
- User-defined, in which the user, via a $continue construct, is able to specify conditions for the opening of a specific predicate. The first argument to the $continue predicate is the predicate for which the condition should hold. The second argument is a counter for the number of times the predicate should be opened.
- Goal-based, in which the decision whether the current predicate is to be opened or not is based on the previously opened predicates: if the current goal is an instance of a previously opened goal, the opening stops.
- Resultant-based, in which the decision to open the current predicate is based upon comparing the resulting resultant to the previous ones: if the resulting resultant is an instance of a previously encountered resultant, further opening is prohibited.

The first two have generally shown themselves to be the most useful. The last two have shown problems with infinite recursion.
3.3 Towards a refinement calculus

In an extension of PAL (PAL/ROP), additional operators for program refinement are defined, such as prune, add, thin, fatten and restrict. In addition, folding and unfolding are defined, working similarly to the opening and abbreviating tactics [14]; the definitions of these are not repeated here. A short, informal description of the other refinement operators is given below. See [14] for additional details.
Definition 12 (Prune) Let P be a program and c a clause in P. prune(P, c) = P − {c}.

Pruning removes a clause from a program. This can be done either if the clause is redundant, or if it cannot be used to derive an answer. The latter happens if the body of the clause cannot be proven.
Definition 13 (Add) Let P be a program and c a clause. add(P, c) = P ∪ {c}.

This adds a new clause to the program.
Definition 14 (Thin) Let c : A ← A1, ..., Ai−1, Ai, Ai+1, ..., An be a clause.

   thin(c, Ai) = A ← A1, ..., Ai−1, Ai+1, ..., An

Thinning removes a superfluous atom from the body of a clause.
Definition 15 (Fatten) Let c : A ← A1, ..., An be a clause and B an atom.

   fatten(c, B) = A ← A1, ..., Ai, B, Ai+1, ..., An

Fattening adds a redundant atom to the body of a clause. This is the reverse of thinning. In addition, a restrict operator is defined which allows one to isolate a sub-program of a program.
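Under an assumed clause representation (head, body-atom list), the refinement operators above can be sketched as follows. These mirror Definitions 12-15, not the actual PAL/ROP implementation:

```python
# A program is a list of clauses; a clause is (head, [body atoms]).

def prune(program, clause):          # Definition 12: drop a clause
    return [c for c in program if c != clause]

def add(program, clause):            # Definition 13: add a new clause
    return program + [clause]

def thin(clause, atom):              # Definition 14: drop a superfluous body atom
    head, body = clause
    return (head, [a for a in body if a != atom])

def fatten(clause, atom, i):         # Definition 15: insert a redundant atom at i
    head, body = clause
    return (head, body[:i] + [atom] + body[i:])

c = ('a', ['b', 'c'])
assert thin(c, 'c') == ('a', ['b'])
assert fatten(('a', ['b']), 'c', 1) == ('a', ['b', 'c'])
assert prune([c], c) == [] and add([], c) == [c]
```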
4 Partial deduction in SSP

It is possible to apply partial deduction on different levels in the framework consisting of SSP and NUT:

1. on LL1 specifications,
2. on the functional expressions,
3. on the meta-specifications.

We will in this paper concentrate on the first of these areas, especially on the unconditional computability statements of LL1. We note in passing that much work on partial evaluation has been performed in functional programming, and approaches dealing with languages based on the λ-calculus exist [1, 23]. Work has also been done on applying partial deduction principles on the meta-specification level of NUT [7, 20].
4.1 Partial deduction of LL1 specifications

Partial deduction of unconditional computability statements can be informally described as follows. Let A, B, C and D be propositional variables and A →f(a) B, B →h(b) C, C →g(c) D be unconditional computability statements. Then possible partial deduction steps are the following:

   A →h(f(a)) C,   A →g(h(f(a))) D,   B →g(h(b)) D

It is easy to notice that the first partial deduction corresponds to forward chaining, the third one to backward chaining, and the second one to either forward or backward chaining. The main differences between our definition of PD and those considered in [12, 13] are the following: first, we consider logic specifications instead of logic programs, and second, we consider intuitionistic (propositional) logic instead of classical (first-order) logic.
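The three PD steps can be illustrated by symbolic composition of the function terms attached to the statements; the triple encoding (body, head, function) is ours:

```python
def compose(first, second):
    """Chain A ->f B with B ->h C into A ->h(f(.)) C."""
    (a, b1, f), (b2, c, h) = first, second
    assert b1 == b2, "head of first must match body of second"
    return (a, c, lambda x: h(f(x)))

s1 = ("A", "B", lambda x: f"f({x})")
s2 = ("B", "C", lambda x: f"h({x})")
s3 = ("C", "D", lambda x: f"g({x})")

forward = compose(s1, s2)        # A ->h(f(a)) C   (forward chaining)
both = compose(forward, s3)      # A ->g(h(f(a))) D
backward = compose(s2, s3)       # B ->g(h(b)) D   (backward chaining)
assert forward[2]("a") == "h(f(a))"
assert both[2]("a") == "g(h(f(a)))"
assert backward[2]("b") == "g(h(b))"
```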
Definition 16 (LL1 specification) Let U be a set of unconditional computability statements A →f B and C a set of conditional computability statements (A →g B) → (C →F(g,c) D); then P = U ∪ C is called an LL1 specification.
Definition 17 (LL1 unconditional resultant) An LL1 unconditional resultant is an unconditional computability statement A →f G where f is a term representing the function which computes G from potentially composite functions over a1, ..., an.

The assumption that G is an atom can be relaxed by transformation of A →f G into {A →f G1, ..., A →f Gm} and vice versa.
Definition 18 (Derivation of an LL1 unconditional resultant) Let C be a fixed computation rule. A derivation of an LL1 unconditional resultant R0 is a finite sequence of LL1 resultants R0 ⇒C R1 ⇒C R2 ⇒C ..., where, for each j, Rj is an unconditional computability statement of the form

   B1, ..., Bi−1, Bi, ..., Bi+m, Bi+m+1 →f(b1,...,bi+m+1) G

and Rj+1 (if any) is of the form (1) or (2):

   B1, ..., Bi−1, C, ..., Bi+m, Bi+m+1 →f(b1,...,bi−1,h(c),...,bi+m,bi+m+1) G     (1)

if

1. the C-selected atom in Rj is Bi,
2. there exists an unconditional computability statement C →h(c) Bi ∈ P, called the matching computability statement; Bi is called the matching atom;

   B1, ..., Bi−1, Bi, ..., Bi+m, Bi+m+1 →h(bi,...,bi+m,f(b1,...,bi−1,bi,...,bi+m,bi+m+1)) F     (2)

if

1. the C-selected atoms in Rj are Bi, ..., Bi+m, G,
2. there exists an unconditional computability statement Bi, ..., Bi+m, G →h(bi,...,bi+m,g) F ∈ P, called the matching computability statement; Bi, ..., Bi+m, G are called the matching atoms.

If Rj+1 does not exist, Rj is called a leaf resultant.
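Derivation step (1) can be sketched as follows: the selected body atom Bi is replaced by the body of a matching statement C →h(c) Bi, and h(c) is spliced into the argument position of f that corresponded to Bi. The resultant representation (body atoms, head, function name, argument terms) is our own:

```python
def step1(resultant, matching, i):
    """resultant: (body, head, f, args); matching: (mbody, atom, h, margs).
    Replace body[i] (the matching atom) by matching's body, and args[i]
    by the composed term h(margs...)."""
    body, head, f, args = resultant
    mbody, atom, h, margs = matching
    assert body[i] == atom, "matching atom must be the selected atom"
    new_body = body[:i] + mbody + body[i+1:]
    new_args = args[:i] + [f"{h}({', '.join(margs)})"] + args[i+1:]
    return (new_body, head, f, new_args)

# The case used in the proof of Lemma 1: B1, B2, B3 ->f(b1,b2,b3) G
# with matching statement C ->h(c) B2 gives B1, C, B3 ->f(b1,h(c),b3) G.
r = (["B1", "B2", "B3"], "G", "f", ["b1", "b2", "b3"])
m = (["C"], "B2", "h", ["c"])
body, head, f, args = step1(r, m, 1)
assert body == ["B1", "C", "B3"]
assert f"{f}({', '.join(args)})" == "f(b1, h(c), b3)"
```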
Definition 19 (Partial deduction of a propositional goal) A partial deduction of a propositional goal A → G in a specification P is the set of leaf resultants derived from G → G (in this case the atoms of A cannot be selected atoms) or from A → A (in this case G cannot be a selected atom).

The assumption that G is an atom can be relaxed to allow a conjunction of atoms in A → G; a partial deduction of A → G in P is then the union of the partial deductions of {A → G1, ..., A → Gm} in P. These partial deductions are called residual unconditional computability statements.
Definition 20 (Partial deduction of an LL1 specification) A partial deduction of an LL1 specification P wrt G is a specification P′ (also called a residual LL1 specification) obtained from P by replacing computability statements having Gi in their heads, or only variables from A in their bodies, by the corresponding partial deductions.

Partial deduction of P is correct and complete. These concepts are defined as follows. Let P be an LL1 specification, A → G a propositional goal, and P′ a partial deduction of P wrt A → G.
Definition 21 (Correctness of partial deduction of an LL1 specification) The function (or its computational equivalent) for the computation of A → G is derivable (can be extracted from a proof of A → G) from P if it is derivable from P′.

Completeness is the converse:

Definition 22 (Completeness of partial deduction of an LL1 specification) The function (or its computational equivalent) for the computation of A → G is derivable (can be extracted from a proof of A → G) from P′ if it is derivable from P.

Our definitions of correctness and completeness require only the derivation of a function which implements the goal (all functions which satisfy this criterion are computational equivalents). The notions of correctness and completeness are thus defined with respect to the possibility of proving the goal and extracting a function from the proof. If we return to the semantics of LL1 specifications, the corresponding calculus has been proven to be correct and complete with respect to the same criteria. Thus our proof of correctness and completeness is based on proving that the derivation of LL1 unconditional resultants is a derivation (proof + program extraction) in a calculus corresponding to the LL1 language.
Lemma 1 Derivation of LL1 unconditional resultants of the form (1) can be done iff it can be done by the SSR1 inference rules.
Proof Our proof is based on consideration of the case when

   Rj = B1, ..., Bi−1, Bi, ..., Bi+m, Bi+m+1 →f(b) G

or, in a short form,

   Rj = B1, B2, B3 →f(b1,b2,b3) G

and the matching computability statement is C(c) →h(c) B2. According to Definition 18 the LL1 unconditional resultant will be

   B1(b1), C(c), B3(b3) →f(b1,h(c),b3) G

Rj and the matching computability statement have the form of the following problem-oriented axioms in the calculus for the LL1 language:

   ⊢ B1(b1), B2(b2), B3(b3) →f(b1,b2,b3) G
   ⊢ C(c) →h(c) B2

Applying the SSR1 inference rules we have the following derivation:

   C(c) ⊢ C(c);  B1(b1) ⊢ B1(b1);  B3(b3) ⊢ B3(b3);  ⊢ C(c) →h(c) B2
   ------------------------------------------------------------------ (→ -)
   C(c) ⊢ B2(h(c));  ⊢ B1(b1), B2(b2), B3(b3) →f(b1,b2,b3) G
   ---------------------------------------------------------- (→ -)
   B1(b1), C(c), B3(b3) ⊢ G(f(b1, h(c), b3))
   ------------------------------------------ (→ +)
   ⊢ B1(b1), C(c), B3(b3) →λb1 c b3 . f(b1,h(c),b3) G

When applying (b1, c, b3) to the expression, and using β-reduction, we end up with

   B1(b1), C(c), B3(b3) →f(b1,h(c),b3) G

which is equal to the LL1 unconditional resultant. The above derivation is a derivation of the inference rule which corresponds to the PD step:

   ⊢ B1(b1), B2(b2), B3(b3) →f(b1,b2,b3) G;  ⊢ C(c) →h(c) B2
   -----------------------------------------------------------
   ⊢ B1(b1), C(c), B3(b3) →λb1 c b3 . f(b1,h(c),b3) G

In the case of empty B1(b1) and B3(b3) the rule is equivalent to the (→ -) rule, and if the unconditional computability statements are in the form of resultants and matching statements (see Definition 18), then derivation in the calculus corresponding to the LL1 language will be done in accordance with the derived inference rule. Q.E.D.
Lemma 2 Derivation of LL1 unconditional resultants of the form (2) can be done iff it can be done by the SSR1 inference rules.
Proof The proof of Lemma 2 is analogous to the proof of Lemma 1. The only difference is in the derivation of the inference rule for PD. We describe only this part of the proof and omit the common part. Rj and the matching computability statement in the calculus for the LL1 language have the form of the following problem-oriented axioms:

   ⊢ B1(b1), B2(b2), B3(b3) →f(b1,b2,b3) P
   ⊢ B2(b2), P →h(b2,p) F

Applying the SSR1 inference rules we have the following derivation:

   B1(b1) ⊢ B1(b1);  B2(b2) ⊢ B2(b2);  B3(b3) ⊢ B3(b3);  ⊢ B1(b1), B2(b2), B3(b3) →f(b1,b2,b3) P
   ----------------------------------------------------------------------------------------------- (→ -)
   B1(b1), B2(b2), B3(b3) ⊢ P(f(b1, b2, b3));  ⊢ B2(b2), P →h(b2,p) F
   ------------------------------------------------------------------- (→ -)
   B1(b1), B2(b2), B3(b3) ⊢ F(h(b2, f(b1, b2, b3)))
   ------------------------------------------------- (→ +)
   ⊢ B1(b1), B2(b2), B3(b3) →λb1 b2 b3 . h(b2,f(b1,b2,b3)) F

When applying (b1, b2, b3) to the expression, and using β-reduction, we end up with

   B1(b1), B2(b2), B3(b3) →h(b2,f(b1,b2,b3)) F

which is equal to the LL1 unconditional resultant. The above derivation is the derivation of the inference rule which corresponds to the PD step:

   ⊢ B1(b1), B2(b2), B3(b3) →f(b1,b2,b3) P;  ⊢ B2(b2), P →h(b2,p) F
   ------------------------------------------------------------------
   ⊢ B1(b1), B2(b2), B3(b3) →h(b2,f(b1,b2,b3)) F

Q.E.D.
Theorem 2 (Correctness and completeness of PD of LL1 specifications) Partial deduction of an LL1 specification is correct and complete.

Proof The proof of the theorem immediately follows from Lemma 1 and Lemma 2 and the correctness and completeness of the SSR1 inference rules.
Q.E.D.

We do not consider conditional computability statements here; however, the general approach is the same. The main difference in this case is the introduction of an order on LL1 specifications and LL1 resultants. Each nested implication in the LL1 specification is a propositional goal, and the derived set of resultants has to be ordered with respect to the order of propositional goals in LL1 specifications. We focus here on tactics using the (1) part of Definition 18.
4.2 Partial Deduction tactics

The tactics that will be proposed are all based on limiting the number of rules that are used in the derivation, and thus performing a limited set of derivations of resultants as defined above. In addition we will describe different stopping criteria. We will use the example on logical ports outlined in Section 2 to exemplify the use of the tactics and stopping criteria. The LL1 statements for the NAND-gate specification from Section 2 are:

1. InINV(ii) →inv(ii) OutINV
2. InAND1(ia1), InAND2(ia2) →and(ia1,ia2) OutAND
3. InAND1(ia1) →asg(ia1) InNAND1
4. InAND2(ia2) →asg(ia2) InNAND2
5. InNAND1(in1) →asg(in1) InAND1
6. InNAND2(in2) →asg(in2) InAND2
7. OutNAND(on) →asg(on) OutINV
8. OutINV(oi) →asg(oi) OutNAND
9. OutAND(oa) →asg(oa) InINV
10. InINV(ii) →asg(ii) OutAND

This is admittedly a trivial example, but it will serve our needs for the moment.
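For experimenting with the tactics below, the ten statements can be written down as a small rule base of (body, head, function-name) triples; this encoding is ours, not NUT syntax:

```python
NAND_SPEC = [
    (("InINV",), "OutINV", "inv"),                 # 1
    (("InAND1", "InAND2"), "OutAND", "and"),       # 2
    (("InAND1",), "InNAND1", "asg"),               # 3
    (("InAND2",), "InNAND2", "asg"),               # 4
    (("InNAND1",), "InAND1", "asg"),               # 5
    (("InNAND2",), "InAND2", "asg"),               # 6
    (("OutNAND",), "OutINV", "asg"),               # 7
    (("OutINV",), "OutNAND", "asg"),               # 8
    (("OutAND",), "InINV", "asg"),                 # 9
    (("InINV",), "OutAND", "asg"),                 # 10
]

# Only statement 8 has OutNAND as its head:
assert len(NAND_SPEC) == 10
assert [s for s in NAND_SPEC if s[1] == "OutNAND"] == [(("OutINV",), "OutNAND", "asg")]
```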
4.2.1 Selection criteria

We can imagine several possible selection criteria for which rules to use in the program derivation.

1. All unconditional computability statements are used.

2. Only a specified set of unconditional computability statements is used in the derivation. This parallels the opening tactic [13] and is defined in a similar manner:

   Definition 23 (Opening of propositional variables) Let G be a set of propositional goals, and P an LL1 specification where P = P1 ∪ S for some disjoint P1 and S, where S is a set of definitions of some propositional variables. We call P1 a source and S an axiomatization. Let P′ be a partial deduction of P wrt G, where the variables selected by the computation rule are the variables of the heads of the clauses in S. Such a partial deduction is called an opening of S in P wrt G.

3. Only binary unconditional computability statements are used, i.e. statements of the form A → B. This may be used to limit the breadth of the derivation tree.
4.2.2 Stopping criteria The following three stopping criteria are suggested. They can be combined freely with all the selection criteria above. 1. Stepwise: The user is queried before each derivation which of the possible derivations he wants to perform. The call stepwise(goal) starts such a derivation. Example: stepwise(f OutNANDg ). There is one unconditional computability statement having OutNAND as a goal (8) and two potential derivations of the body of this clause (using 1 or 7). If the user chooses 1, the derivation using 1 and 8 gives the resultant: ?????? ! 11. InINV (ii)asg (inv (ii))OutNAND (1,8) He might then continue using this as the starting point for further derivations, or stop and start the normal synthesis process. 2. Restricted: Here, the goal is given together with an indicator of the maximum depth of the derivation tree. For this and for the next stopping criteria a branch in the derivation tree is pruned if an atom which has been used earlier in the derivation as a matching atom is reintroduced in the body of the unconditional computability statement. The pruning is to avoid in nite recursion. A restricted derivation is started with restricted(goal,depth). Example: restricted(f OutNANDg ,2)
11. InINV(ii) --asg(inv(ii))--> OutNAND (1,8)
12. OutNAND(on) --asg(asg(on))--> OutNAND (7,8)*

This branch is pruned, since we have the goal on the left-hand side.

13. OutAND(oa) --asg(inv(asg(oa)))--> OutNAND (9,11)

3. Assumption based: Stops when the body of the resultant contains only assumptions, or when no additional resultants are derivable. This, like the other stopping criteria, may use any of the selection criteria above. Using the third selection criterion (i.e. only binary unconditional computability statements are used), we get the following for assumption({OutNAND},{InNAND1,InNAND2}).
11. InINV(ii) --asg(inv(ii))--> OutNAND (1,8)
13. OutAND(oa) --asg(inv(asg(oa)))--> OutNAND (9,11)
The derivation using (10,13) will be pruned according to the above rule, since this would have reintroduced InINV, which has already been used as a matching atom. The result is in this case equal to the previous partial deduction. The statements {2,5,6,13} could then have been used in a synthesis process to extract the resulting function OutNAND = asg(inv(asg(and(asg(in1),asg(in2))))). On the other hand, if we used the first selection criterion, including all statements, we would further get:

14. InAND1(ia1), InAND2(ia2) --asg(inv(asg(and(ia1,ia2))))--> OutNAND (2,13)
15. InNAND1(in1), InAND2(ia2) --asg(inv(asg(and(asg(in1),ia2))))--> OutNAND (5,14)
16. InNAND1(in1), InNAND2(in2) --asg(inv(asg(and(asg(in1),asg(in2)))))--> OutNAND (6,15)

As we see, 16 contains the complete formula; thus in this special case, partial deduction is in fact a total deduction. The derivations of 15 and 16 could in practice be performed in parallel, but this is not presently included.
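Each derivation step above composes two statements: the body of the producing statement replaces the matching atom in the consuming statement, and its function term is substituted for the matched variable. A minimal Python sketch of one such unfolding step; the (body, term, head) encoding is our own, and the forms of statements 1 and 8 are inferred from resultant 11, so treat them as assumptions.

```python
# One derivation (unfolding) step on unconditional computability
# statements. A statement is (body, term, head), where body is a tuple
# of (atom, variable) pairs and term is the extracted function as a string.

def unfold(u1, u2):
    """Resolve u1's head against the matching atom in u2's body,
    producing the resultant statement."""
    body1, term1, head1 = u1
    body2, term2, head2 = u2
    match = dict(body2)
    assert head1 in match, "u1's head must occur in u2's body"
    var = match[head1]
    # New body: u2's body with the matched atom replaced by u1's body.
    new_body = tuple(b for b in body2 if b[0] != head1) + body1
    # New term: substitute u1's term for the matched variable.
    new_term = term2.replace(var, term1)
    return (new_body, new_term, head2)

u1 = ((("InINV", "ii"),), "inv(ii)", "OutINV")    # assumed form of 1
u8 = ((("OutINV", "oi"),), "asg(oi)", "OutNAND")  # assumed form of 8
print(unfold(u1, u8))  # ((('InINV', 'ii'),), 'asg(inv(ii))', 'OutNAND')
```

The printed resultant mirrors resultant 11 above; a real implementation would also rename variables to keep resultant bodies distinct.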
4.2.3 Algorithms for partial deduction in SSP

The following algorithm is used in the case of the third stopping criterion combined with the first selection rule. How the other combinations are handled is briefly sketched afterwards.

Call: assumption(G,A,1) (the last parameter indicates the first selection criterion)

G: A set of goals
A: A set of assumptions
U: The set of unconditional computability statements in P
US: The set of unconditional computability statements selected by the selection rule. Using the above selection rule, US = U
C: The set of conditional computability statements in P
P = U ∪ C
M: A set of matching atoms
S: The set of old resultants
R: The set of new resultants

1. M = G; S = {u ∈ US | goal(u) ∈ G}
2. In depth-first order, for U1: (A) → B ∈ US and U2: (C) → D ∈ S with goal(U1) ∉ M, let R be the set of resultants produced by the derivation using any such U1 ∈ US and U2 ∈ S, making the sets of variables in the bodies of the resultants distinct. If R = ∅ then goto 5. The condition goal(U1) ∉ M is there to avoid infinite recursion.
3. For each new node in the derivation tree, corresponding to all r ∈ R, we have US = ((US - {U1}) - {U2}) ∪ {r}; M = M ∪ {goal(U1)}; S = {r}
4. If ∀a (a ∈ body(r) ∧ a ∈ A) then solution = true, goto 6; else goto 2
5. Deduction stopped; solution = false
6. P' = (P - {u ∈ US | goal(u) ∈ G}) ∪ RL, where RL is the set of leaf resultants when solution = false, else RL = {r}

The other selection criteria would restrict the set US. When using the restricted stopping criterion, step 4 would contain a check of the current depth of the node in the derivation tree. For the nand-gate example we have:

G  = {OutNAND}
A  = {InNAND1,InNAND2}
US = U = {1,2,3,4,5,6,7,8,9,10}
C  = {}
P  = {1,2,3,4,5,6,7,8,9,10}
1. M = {OutNAND}
   S = {8}
2. Possible derivation {(U1 = 1, U2 = 8)}
   R = {11}
3. New node
   US = {2,3,4,5,6,7,9,10,11}
   M = {OutNAND,OutINV}
   S = {11}
4. Not finished, goto 2
2. Possible derivation {(U1 = 9, U2 = 11)}
   R = {13}
3. New node
   US = {2,3,4,5,6,7,10,13}
   M = {OutNAND,OutINV,InINV}
   S = {13}
4. Not finished, goto 2
2. Possible derivation {(U1 = 2, U2 = 13)}
   R = {14}
3. New node
   US = {3,4,5,6,7,10,14}
   M = {OutNAND,OutINV,InINV,OutAND}
   S = {14}
4. Not finished, goto 2
2. Possible derivation {(U1 = 5, U2 = 14)}
   R = {15}
3. New node
   US = {3,4,6,7,10,15}
   M = {OutNAND,OutINV,InINV,OutAND,InAND1}
   S = {15}
4. Not finished, goto 2
2. Possible derivation {(U1 = 6, U2 = 15)}
   R = {16}
3. New node
   US = {3,4,7,10,16}
   M = {OutNAND,OutINV,InINV,OutAND,InAND1,InAND2}
   S = {16}
4. Finished, goto 6
6. P' = {1,2,3,4,5,6,7,9,10,16}
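The assumption-based loop in the algorithm above can be rendered as a short Python sketch. It works at the propositional level (function terms are omitted), simplifies step 2 to a single depth-first candidate per iteration, and runs on a hypothetical two-statement chain rather than the nand-gate specification.

```python
# Sketch of assumption-based partial deduction at the propositional
# level. Statements are (body, head) pairs; terms are omitted.

def assumption_pd(goals, assumptions, US):
    matched = set(goals)                              # M: matching atoms
    old = [s for s in US if s[1] in goals]            # S: goal-headed statements
    US = [s for s in US if s not in old]
    leaves = list(old)
    while old:
        r_body, r_head = old[0]
        if all(a in assumptions for a in r_body):     # step 4: only assumptions
            return True, old[0]
        candidates = [u for u in US
                      if u[1] in r_body and u[1] not in matched]
        if not candidates:                            # step 5: stuck
            return False, leaves
        u1 = candidates[0]                            # depth-first choice
        # Steps 2-3: unfold u1 into the current resultant's body.
        new_body = tuple(b for b in r_body if b != u1[1]) + u1[0]
        resultant = (new_body, r_head)
        matched.add(u1[1])
        US = [u for u in US if u != u1] + [resultant]
        old = [resultant]
        leaves = [resultant]
    return False, leaves

# Hypothetical chain: goal G, derivable from B, which needs A1 and A2.
US = [(("B",), "G"), (("A1", "A2"), "B")]
ok, result = assumption_pd({"G"}, {"A1", "A2"}, US)
print(ok, result)   # True (('A1', 'A2'), 'G')
```

The returned resultant plays the role of RL in step 6; on failure the leaf resultants are returned instead, so P' can be formed either way.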
4.3 Additional refinement operators

Just as additional refinement operators have been defined in connection with partial deduction in logic programming, we define similar refinement operators below.
Definition 24 (LL1 prune) Let P be a specification and s an unconditional computability statement in P.

prune(P,s) = P - {s}
Pruning removes an unconditional computability statement from the specification. This can be done either if the statement is redundant, or if it cannot be used to derive a goal. The latter happens if the head of the statement is not the goal and is not part of the body of another statement, or if the statement has an atom in its body whose value cannot be assumed known and which is not the head of another statement. In connection with the nand-gate example, the following statement could have been pruned if present, since it cannot be used in the derivation of the goal as the problem is currently defined:
InOR1(io1), InOR2(io2) --or(io1,io2)--> OutOR
Definition 25 (LL1 Add) Let P be a specification and s an unconditional computability statement.

add(P,s) = P ∪ {s}
This adds a new statement to the specification. It can be a way of restricting the possible solution space by adding another assumption. This is parallel to weakening as specified by Wadler [27].
Definition 26 (LL1 Thin) Let s: A_{1,i-1}, A_i, A_{i+1,n} → A be a statement.

thin(s, A_i) = A_{1,i-1}, A_{i+1,n} → A

Thinning removes a superfluous atom from the body of a statement, that is, an atom which is not needed for the extraction of a function. This can be used when we have, for instance, the following situation:

1. X(x) --f(x)--> Y
2. X(x), Y(y) --g(x,y)--> Z

Performing a derivation wrt Z gives the following resultant:

3. X(x), X(x) --g(x,f(x))--> Z

Thinning gives:

4. X(x) --g(x,f(x))--> Z
The above is parallel to contraction as specified by Wadler [27]. It is incorporated in the above algorithm.
Definition 27 (LL1 Fatten) Let s: A_{1,n} → A be a statement and B an atom.

fatten(s, B) = A_{1,i}, B, A_{i+1,n} → A

Fattening adds a redundant atom to the body of a statement. It is the reverse of thinning. In addition, a restrict operator might be defined which allows one to isolate a sub-specification of P.
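The four refinement operators are simple enough to write down directly. A minimal Python sketch under a hypothetical encoding of our own: a specification is a set of statements, each a (body, head) pair with the body a tuple of atoms. Note that thin drops a single occurrence of the atom, matching the contraction example where X(x), X(x) → Z thins to X(x) → Z.

```python
# Sketch of the LL1 refinement operators. A specification P is a set of
# statements; a statement is (body, head) with body a tuple of atoms.

def prune(P, s):
    """Remove a redundant or unusable statement from the specification."""
    return P - {s}

def add(P, s):
    """Add a new statement, e.g. an extra assumption."""
    return P | {s}

def thin(s, atom):
    """Drop one occurrence of a superfluous body atom (contraction)."""
    body, head = s
    i = body.index(atom)
    return (body[:i] + body[i + 1:], head)

def fatten(s, atom, i=0):
    """Insert a redundant atom at position i: the reverse of thinning."""
    body, head = s
    return (body[:i] + (atom,) + body[i:], head)

# Thinning the duplicated premise from the example above:
r = (("X", "X"), "Z")
print(thin(r, "X"))   # (('X',), 'Z')
```

Since thin and fatten act on individual statements while prune and add act on the whole specification, a restrict operator would sit at the specification level, selecting a subset of P.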
5 Concluding remarks

We have in this paper tried to transfer the notion of partial deduction to the framework of structural synthesis of programs. Our main results are a definition of partial deduction in this framework, proofs of completeness and correctness of PD in the framework, and a set of tactics for utilising it together with a set of stopping criteria. An algorithm avoiding infinite looping in the derivation is proposed. The use of the partial deduction technique is in this case basically motivated by debugging. In the case of a non-solvable problem (a propositional goal cannot be derived), the PD of an LL1 specification contains the set of all possible derivations and could be a good source for reasoning about possible errors in the specification [18]. We have in this paper concentrated on partial deduction of unconditional computability statements and only pointed out a possibility of extending the approach to the case of conditional computability statements. This problem should be investigated in the future.
References

[1] M. Z. Ariola and Arvind. A syntactic approach to program transformation. In Proceedings of the Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM'91), pages 116-129. ACM Press, 1991.
[2] P. Ballandby. Towards automating software design in the partial deduction framework. Master's thesis, IDT, NTH, Trondheim, Norway, December 1992.
[3] H. P. Barendregt. The Lambda Calculus: Its Syntax and Semantics. North-Holland, 2nd edition, 1984.
[4] D. Bjørner, A. P. Ershov, and N. D. Jones, editors. Partial Evaluation and Mixed Computation, Gammel Avernæs, October 18-24, 1987. North-Holland.
[5] F. P. Brooks, Jr. No silver bullet: Essence and accidents of software engineering. In H. J. Kugler, editor, Information Processing '86, pages 1069-1076. North-Holland, 1986.
[6] R. M. Burstall and J. Darlington. Some transformations for developing recursive programs. Journal of the ACM, 24(1):44-67, January 1977.
[7] H.-M. Haav and M. Matskin. Using partial deduction for automatic propagation of changes in OODB. In H. Kangassalo et al., editors, Information Modelling and Knowledge Bases IV, pages 339-352. IOS Press, 1993.
[8] B. Johnson. Is the silver bullet knowledge-based? IEEE Software, page 10, January 1993.
[9] W. L. Johnson and M. S. Feather. Using evolution transformations to construct specifications. In Lowry and McCartney [17], pages 65-91.
[10] N. D. Jones, C. K. Gomard, and P. Sestoft. Partial Evaluation and Automatic Program Generation. Prentice Hall, Englewood Cliffs, NJ, 1993.
[11] J. Komorowski. A Specification of an Abstract Prolog Machine and Its Application to Partial Evaluation. PhD thesis, Department of Computer and Information Science, Linköping University, Linköping, Sweden, 1981.
[12] J. Komorowski. Synthesis of programs in the partial deduction framework. In Lowry and McCartney [17], pages 377-404.
[13] J. Komorowski. A prolegomenon to partial deduction. Fundamenta Informaticae, 18(1):41-64, January 1993.
[14] J. Komorowski and S. Trcek. Towards Refinement of Definite Logic Programs. Submitted to ISMIS'94, 1994.
[15] J. W. Lloyd and J. C. Shepherdson. Partial evaluation in logic programming. Technical Report CS-87-09 (revised 1989), University of Bristol, England, July 1989.
[16] M. R. Lowry. Software engineering in the twenty-first century. In Lowry and McCartney [17], pages 627-654.
[17] M. R. Lowry and R. D. McCartney, editors. Automating Software Design. The MIT Press, California, USA, 1991.
[18] M. Matskin. Debugging in programming systems with structural synthesis of programs. Software, 4:21-26, 1983. (In Russian).
[19] M. Matskin. Program synthesis. Synopsis of lecture notes, IDT, NTH, 1992.
[20] M. Matskin and J. Komorowski. Partial deduction and manipulation of classes and objects in an object-oriented environment. In Proceedings of the First Compulog-Network Workshop on Programming Languages in Computational Logic, Pisa, Italy, April 6-7, 1992.
[21] G. Mints and E. Tyugu. Justification of structural synthesis of programs. Science of Computer Programming, 2(3):215-240, 1982.
[22] U. Nilsson and J. Maluszynski. Logic, Programming and Prolog. Wiley, 1990.
[23] D. A. Schmidt. Static properties of partial evaluation. In Bjørner et al. [4], pages 465-483.
[24] E. Tyugu. The structural synthesis of programs. In Algorithms in Modern Mathematics and Computer Science, number 122 in Lecture Notes in Computer Science, pages 261-289, Berlin, 1981. Springer-Verlag.
[25] E. Tyugu. Knowledge-Based Programming. Turing Institute Press, 1988.
[26] E. Tyugu. Three new-generation software environments. Communications of the ACM, 34(6):46-59, June 1991.
[27] P. Wadler. Is there a use for linear logic? In Proceedings of the Symposium on Partial Evaluation and Semantics-Based Program Manipulation (PEPM'91), pages 255-273. ACM Press, 1991.