Towards a Logic Programming Methodology based on Higher-order Predicates

Andreas Hamfelt
Computing Science Department, Uppsala University

Jørgen Fischer Nilsson
Department of Computer Science, Technical University of Denmark
Abstract:
This paper outlines a logic programming methodology which applies standardized logic program recursion forms afforded by a system of general purpose recursion schemes. The recursion schemes are conceived of as quasi higher-order predicates which accept predicate arguments, thereby representing parameterized program modules. This use of higher-order predicates is analogous to higher-order functionals in functional programming. However, these quasi higher-order predicates are handled by a metalogic programming technique within ordinary logic programming. Some of the proposed recursion operators are actualizations of mathematical induction principles (e.g. structural induction as a generalization of primitive recursion). Others are heuristic schemes for commonly occurring recursive program forms. The intention is to handle all recursions in logic programs through the given repertoire of higher-order predicates. We carry out a pragmatic feasibility study of the proposed recursion operators with respect to the corpus of common textbook logic programs. This pragmatic investigation is accompanied by an analysis of the theoretical expressivity. The main theoretical results concerning computability are: (1) primitive recursive functions can be re-expressed in logic programming by predicates defined solely by non-recursive clauses augmented with a fold recursion predicate akin to the fold operators in functional programming; (2) general recursive functions can be re-expressed likewise, since fold allows re-expression of a linrec recursion predicate facilitating linear, unbounded recursion.

Keywords: Higher-order and metalogic programming; recursion schemes; composition, parameterization, and modularization of logic programs.

Information about authors:
Andreas Hamfelt: Mail address: Computing Science Department, Uppsala University, Box 311, S-751 05 Uppsala, Sweden. Fax: +46 18 52 12 70. Phone: +46 18 18 10 37. Email: [email protected].
Jørgen Fischer Nilsson: Mail address: Department of Information Technology 344, DTU, DK-2800 Lyngby, Denmark. Fax: +45 42 88 45 30. Phone: +45 45 25 37 30. Email: [email protected].
1 Introduction

This paper is intended as an initial contribution to a methodology for structured logic program development. The paper proposes a system of standardized general purpose recursion schemes from which programs are actualized by specialization. In this way a discipline concerning the choice of recursion structures is imposed on the program. Similar recursion structures are proposed in a related but independent work for pure higher-order logic programming in λProlog [1]. Other proposals for recursion templates are found in [2, 3, 21, 22]. Use of higher-order functions for composition and structuring of programs is a well-established part of functional programming methodology, cf. e.g., [5, 6]. In analogy to functional programming, recursion schemes (recursion operators) are provided here as higher-order predicates defined by appropriate clauses, from which concrete logic programs come about by instantiation of predicate parameters with predicate constants. All recursions are to be expressed by the higher-order predicates. Below, this programming technique is referred to as HOLP (higher-order logic programming). In the concluding section we mention some recent developments in functional programming methodology and outline how HOLP can serve as a basis for a logic programming methodology. Higher-order predicates (with the accompanying predicate argument variables) are not available in ordinary first-order logic programming languages. However, the facilities needed can be obtained in the setting of ordinary logic programming through a well-known metalogic programming technique originating in [7], as explained in sect. 2, which leaves intact the overall program structure. This choice puts our approach in between pure higher-order and pure metalogic programming, thus avoiding intricacies of both. A basic repertoire of recursion operators is introduced in sect. 3, conforming with a previous paper [8].
However, in contrast to the quest for one universal recursion form in [8], the approach in the present paper is to conduct a "bottom-up" study of programs expressible within the HOLP technique, starting from the repertoire of basic operators. Then, in sect. 4, simple programs for the data types of natural numbers and lists are considered, as in textbook presentations of logic programs. More sophisticated programs such as parsers and (meta)interpreters are considered in sect. 8 and 9. The primary intention of this exploration is to establish the pragmatic adequacy of the recursion operators by formulating within HOLP a comprehensive corpus of common logic programs, using mainly [9] as source and reference. This study in turn is to provide the foundation for a HOLP logic programming methodology encouraging and supporting reuse of program modules. This is achieved by applying the recursion operators as combining forms for composing programs in the spirit of structured programming, and as practised as a matter of routine in functional programming. In sect. 5, 6 and 7 the theoretical adequacy of the operators, i.e., their capacity for capturing computationally expressive classes of programs, is investigated by relating to the theory of primitive recursive and general recursive functions and subsequently Turing machine computability. The investigation materializes into two theorems characterizing the theoretical expressivity of the operators.
2 Higher-Order Logic Programming Predicates

Logically, a higher-order predicate is a predicate accepting a predicate (i.e., a term denoting a relation) as argument, in contrast to first-order predicates, which accept only individual terms as arguments. In the higher-order framework of logical type theory, predicate terms are formed as predicate constants, predicate variables, and appropriate compound predicate expressions. These expressions may include the (typed) λ-calculus with abstraction as a generalisation of first-order terms, as in Church's re-formulation of The Simple Theory of Types, cf. e.g., [10]. In this context, besides predicate constants and variables, we include some restricted expression forms, but we dispense with λ-abstractions. We distinguish two types of terms:

– ordinary logic programming terms (i.e., individual terms including list terms),
– predicate terms.

A HOLP predicate term is either

– a predicate variable,
– a predicate constant, or
– a compound predicate term of the form p(t1, ..., tm), where p is a predicate constant and the ti are terms (individual or predicate terms).

The compound predicate terms provide the Currying mechanism, so that all of the arguments to a predicate need not be supplied at once. This yields the following syntactical extension for the usual atomic formulas in clauses:

Pterm(t1, ..., tn)

where Pterm is a predicate term, and the arguments ti are either ordinary individual terms or predicate terms.² The number of argument terms to a predicate term has to conform with the arity of the predicate term. Some predicate symbols may be generic or overloaded. Below it is explained how this extension is coped with straightforwardly in ordinary logic programming. It should be observed that (in contrast to λProlog [11] and HiLog [12]) function variables are not admitted in individual terms, so our individual terms are as in Prolog. The absence of function variables yields a purely relational higher-order language, implying that ordinary first-order unification suffices.

² Since we are tacitly adopting the framework of logical type theory, a type stratification is assumed for the predicate symbols, which precludes self-application, e.g., ruling out p(p).
Let us stress that we are concerned with methodology rather than with new language proposals: The above logic programming language extensions can be viewed as notational conveniences according to the reduction described in sect. 2.2.
2.1 Composing Programs by Higher-order Predicates

Consider a higher-order predicate q with m predicate arguments followed by n ordinary (i.e., individual) arguments, defined logic-program-wise through appropriate clauses. The predicate q may be conceived as a parameterized program module accepting m parameter program names (i.e. predicates) p1, ..., pm, yielding a first-order predicate q(p1, ..., pm) to be invoked as in the atomic formula

q(p1, ..., pm)(t1, ..., tn)

The parameter programs may themselves be parameterized. The predicate terms p1, ..., pm are assumed to be ground upon invocation. Below we assume a leftmost-first computation rule in the procedural understanding of program clauses. See, however, sect. 10 for obtaining a declarative understanding of the programs. This extension of first-order clauses, which refrains from higher-order axioms, should be paralleled with the availability of higher-order functions in functional programming.
2.2 Basic Higher-order Functionality through Metalogic

Using the method in [7], the above higher-order atom Pterm(t1, ..., tn) is handled in e.g., Prolog as the first-order atom

a(Pterm, t1, ..., tn)

where a (for apply) is a distinguished predicate expressing predication, that is to say, application of a predicate to its arguments. The application predicate a is defined "pointwise" by clauses of the form

a(p, X1, ..., Xn) ← p(X1, ..., Xn)

for each predicate constant p in the program, and further by

a(p(Y1, ..., Ym), X1, ..., Xn) ← p(Y1, ..., Ym, X1, ..., Xn)

for the relevant predicate terms. As a trivial example consider a higher-order predicate and, which might be introduced with

and(P, Q)(X1, ..., Xn) ← P(X1, ..., Xn), Q(X1, ..., Xn)

for relevant index values of n. In ordinary logic programming this is written as

a(and(P,Q), X1, ..., Xn) ← a(P, X1, ..., Xn), a(Q, X1, ..., Xn)
Observe the convention adopted in this paper of using sans serif font for the ordinary logic programming form (as in Prolog) and italics for the sugared HOLP form. This example operator illustrates the composition of two programs from simpler predicate definitions by means of higher-order predicates in the manner of structured programming. The predicate a expressing predication may be viewed as a variant of a metainterpreter demo predicate. Indeed, the defining clauses for a are akin to clauses of a metainterpreter, cf. the above clauses for and. However, in the predication predicate the predicate part and its arguments are kept as separate arguments, whereas they are combined into an atomic formula in the demo predicate. These observations indicate that our restricted higher-order approach can be viewed as, and indeed reduced to, a variant of metalogic programming, expressible as ordinary logic programs. The present quasi higher-order perspective, however, offers certain advantages compared with metalogic programming in view of the present aims: Firstly, the type discipline of higher-order logic maintains a distinction between predicate terms and individual terms, which precludes the confusion of variables at different levels that is latent in metalogic programming. Secondly, the higher-order logic is conceptually cleaner, since it explains composition and parameterization of program modules without resorting to encoding of programs as terms cluttered with calls to a metainterpreter. Thirdly, the higher-order logic framework avoids most of the computational overhead inherent in the general comprehensive metainterpreter approach.
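The apply technique can be mimicked outside logic programming as well. Below is a minimal Python sketch (our own illustration, not part of the paper's formalism): a predicate term is either a plain function (a predicate constant) or a tuple pairing a function with its Curried parameters, and a dispatches accordingly, mirroring the two clause forms for the application predicate above.

```python
def a(pterm, *args):
    """Predication: apply a predicate term to its arguments.

    A predicate term is either a predicate constant (a Python function)
    or a compound term (head, param1, ..., paramm) whose parameters are
    prepended to the arguments, mirroring
    a(p(Y1,...,Ym), X1,...,Xn) <- p(Y1,...,Ym, X1,...,Xn)."""
    if callable(pterm):
        return pterm(*args)
    head, *params = pterm
    return head(*params, *args)

def and_(p, q, *xs):
    """The 'and' operator: both argument predicates must hold of xs."""
    return a(p, *xs) and a(q, *xs)

def even(x):
    return x % 2 == 0

def positive(x):
    return x > 0

# and(even, positive) holds of 4 but not of -2
print(a((and_, even, positive), 4))   # True
print(a((and_, even, positive), -2))  # False
```

The tuple plays the role of the compound predicate term; the dispatcher, like the clauses for a, is the only place where predication is interpreted.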
3 Basic Higher-order Recursion Predicates

We introduce in this section two quasi higher-order predicates (recursion operators) which are inspired by the following two functionals, foldright and foldleft, from higher-order functional programming [5, 6, 13, 14]:

foldr f y nil = y
foldr f y x.xs = f x (foldr f y xs)

and

foldl f y nil = y
foldl f y x.xs = foldl f (f x y) xs

The relational counterparts we formulate as

foldr(P,Q)(Y, L, Z) ← Q(Y, L, Z)
foldr(P,Q)(Y, X.T, W) ← foldr(P,Q)(Y, T, Z) ∧ P(X, Z, W).

foldl(P,Q)(Y, L, Z) ← Q(Y, L, Z)
foldl(P,Q)(Y, X.T, W) ← P(X, Y, Z) ∧ foldl(P,Q)(Z, T, W).

Here the 3-ary predicate argument P is the relational counterpart and generalization of the function argument f. With non-functional argument predicates the higher-order relational recursion schemes define non-functional, i.e., relational predicates. A direct relational translation of the functional fold schemes yields the inflexible base case foldr/l(P)(Y, nil, Y). However, it is convenient to generalize the relational fold schemes so as to allow the base case to be computed by a certain predicate argument Q, cf. e.g., [15]. Furthermore, partial traversal of the input list is facilitated if the base case is not confined to the empty list. As demonstrated in the next section these operators suffice for expressing most commonly occurring logic programs dealing with natural numbers and lists in a prima facie non-recursive form without extensive rewriting. Indeed, as shown in sect. 5 and 6, the class includes the primitive recursive as well as the general recursive functions. Further useful operators are exists and all:

exists(P)(nil) ← fail.
exists(P)(X.T) ← P(X).
exists(P)(X.T) ← exists(P)(T).

all(P)(nil).
all(P)(X.T) ← P(X), all(P)(T).

but both of these can easily be managed within the fold schemes. Foldright expresses a structural induction principle on lists; the unfolded form of the foldr operator in the case of [X1, ..., Xn] with n > 0 and 1 ≤ i ≤ n is:

foldr(P,Q)(Y, [X1, ..., Xn], W) ←
    Q(Y, [Xi, ..., Xn], Zi),
    P(Xi−1, Zi, Zi−1), P(Xi−2, Zi−1, Zi−2), ..., P(X1, Z2, W).

and correspondingly for the foldl operator

foldl(P,Q)(Y, [X1, ..., Xn], W) ←
    P(X1, Y, Z1), P(X2, Z1, Z2), ..., P(Xi−1, Zi−2, Zi−1),
    Q(Zi−1, [Xi, ..., Xn], W).

The argument predicate P in the fold schemes takes as argument the head only of the list. Some programs require the information represented by the whole input list at each recursion step. This is, e.g., the case for the primitive recursion scheme presented below and natural number processing programs such as factorial.
To deal with these programs we derive as a specialization the recursion scheme natural number recursion, where the natural number n is represented as a list of length n:

natrec(P,Q)(nil, Y) ← Q(Y).
natrec(P,Q)(X.T, W) ← natrec(P,Q)(T, V), P(X.T, V, W).

Theorem: Natural number recursion can be defined in terms of foldr by exploiting an auxiliary argument carrying the reconstructed list through the calls to P, viz.,

natrec((P,Q),X,Y) ← foldr((p(P),q(Q)),_,X,(Y,_)).
a(p(P),X,(V,T),(W,[X|T])) ← a(P,[X|T],V,W).
a(q(Q),_,[],(V,[])) ← a(Q,V).

Proof: By induction on the length of the input list; for the full proof see [15].
Let us stress again that these recursion operators can be understood as ordinary logic programs by simple rewriting, as explained in the preceding section and exemplified just above. It is easy to show by structural induction that these higher-order predicates are bound to terminate given terminating predicate arguments and assuming a ground list argument. Section 6 introduces a not necessarily terminating specialization of foldl which omits the list argument. With non-functional argument predicates the higher-order relational recursion schemes define non-functional, i.e., relational, predicates. In contrast, higher-order functionals in functional programming insist on functions as arguments yielding functions as results.
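To make the relational reading concrete, here is a small Python model (our own illustration, not part of the paper's formalism) of the generalized foldr: argument predicates are written as generators of solutions, and the base predicate q is consulted on every suffix of the list, reflecting the generalized base case. With the paper's cons and id arguments, foldr((cons,id),X,Y,Z) appends the two lists, which is exactly the plus/append program of the next section.

```python
def foldr(p, q, y, xs):
    """Relational foldr(P,Q)(Y, L, Z): yields every Z such that either
    Q(Y, L, Z) holds, or L = X.T, foldr(P,Q)(Y, T, Z') and P(X, Z', Z)."""
    yield from q(y, xs)                # generalized base: may fire on any suffix
    if xs:
        x, t = xs[0], xs[1:]
        for z in foldr(p, q, y, t):    # recurse over the tail
            yield from p(x, z)         # then combine with the head

def cons(x, z):
    """a(cons,X,Y,[X|Y])."""
    yield [x] + z

def id_(y, l):
    """a(id,X,[],X): this base succeeds only on the exhausted list."""
    if l == []:
        yield y

# foldr((cons,id),X,Y,Z) on plain lists is append
print(list(foldr(cons, id_, [3, 4], [1, 2])))  # [[1, 2, 3, 4]]
```

A non-deterministic argument predicate would simply yield several solutions, and foldr would enumerate the corresponding answers.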
3.1 Recursively Defined Predicates

Let ⇒ be the immediate dependency binary relation between predicates in an ordinary logic program. That is to say, q ⇒ p iff there exists a clause where the predicate p is in the head and q is in the body. Let ⇒⁺ denote the transitive closure of ⇒. Then a predicate p is recursively defined in the program iff p ⇒⁺ p. It is a key point in our approach that the only recursive predicates admitted are those present in the fixed repertoire of higher-order predicates to be described. Hence the predicates with accompanying defining clauses introduced by the programmer are supposed to be non-recursively defined. Some of them serve as predicate arguments to the recursion operators.
4 Basic Logic Programs

In this section solutions are given to a representative selection of pure logic programming examples concerning natural numbers and lists in the textbook [9]. The purpose of this section is to show that the programs can be coped with using solely the above foldright and foldleft operators.
4.1 Natural Number Processing

The HOLP approach favours replacement of general compound terms with list terms. Accordingly, let the natural number n be represented as a list nil.nil. ... .nil with n + 1 occurrences of nil, where . binds to the right and a.a′ means cons(a, a′).

For the program notation, note that [[]] corresponds to nil.nil, so 1, 2, ... appear as [[]], [[],[]], etc. Natural number can be defined as

nat(N) ← all(isnil)(N).
isnil(nil).

plus as

plus(X,Y,Z) ← foldr(cons, id)(X, Y, Z).
cons(X, Y, X.Y).
id(X, nil, X).

or, as an ordinary first-order logic program, cf. sect. 2.2,

plus(X,Y,Z) ← foldr((cons,id),X,Y,Z).
a(cons,X,Y,[X|Y]).
a(id,X,[],X).
Let us introduce an auxiliary curried variant of plus, called plus(X),

a(plus(X),_,Y,Z) ← a(plus,X,Y,Z).

which adds a given number X to Y in order to yield Z. Now times can be defined as
times(X,Y,Z) ← foldr((plus(X),null),_,Y,Z).
a(null,_,[],[]).

exponential analogously as

exp(X,Y,Z) ← foldr((times(X),one),_,Y,Z).
a(times(X),_,Y,Z) ← a(times,X,Y,Z).
a(one,_,[],[[]]).

and factorial (p. 39 in [9]), using the natural number recursion scheme, can be expressed as

factorial(X,Y) ← natrec((times,isone),X,Y).
a(isone,[[]]).
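The reduction of times to iterated plus can be played through with the list encoding of numbers in a small Python model (illustrative only; the relational foldr and the cons/id arguments are repeated here so the sketch is self-contained):

```python
def foldr(p, q, y, xs):
    """Relational foldr: p and q are generators of solutions."""
    yield from q(y, xs)
    if xs:
        for z in foldr(p, q, y, xs[1:]):
            yield from p(xs[0], z)

def cons(x, z):
    yield [x] + z

def id_(y, l):
    if l == []:
        yield y

def null(_, l):
    """a(null,_,[],[]): the exhausted list maps to zero."""
    if l == []:
        yield []

def plus_curried(x):
    """a(plus(X),_,Y,Z): ignore the list head, add the fixed number x."""
    def p(_, z):
        yield from foldr(cons, id_, x, z)   # plus(x, z, result)
    return p

# numbers n encoded as lists of length n
two, three = [None] * 2, [None] * 3
results = list(foldr(plus_curried(two), null, None, three))
print(len(results[0]))  # 6, i.e. times(2,3,6)
```

Each step of the fold over the second argument adds the fixed first argument once, starting from zero, just as in the times clause above.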
The ordering relationship that X is greater than Y can be expressed with append or directly using foldl as
greater_than(X,Y) ← foldl((tail,notnil),X,Y,_).
a(tail,_,[F|T],T).
a(notnil,[F|T],[],_).

allowing maximum to be defined as

maximum(X,Y,X) ← greater_than(X,Y).
maximum(X,Y,Y) ← greater_than(Y,X).

Modulus, with its common recursive definition, can be expressed with foldl as well, allowing the greatest common divisor to be computed by Euclid's algorithm:

gcd(Xs,Ys,Gcd) ← foldl((p,q),(Xs,Ys),_,Gcd).
a(p,_,(Xs,Ys),(Ys,Zs)) ← a(mod,Xs,Ys,Zs).
a(q,(Xs,[]),_,Xs) ← greater_than(Xs,[]).
4.2 List Processing Logic Programs

The program list can be expressed as

list(X) ← foldr((true,trueq),_,X,_).
a(true,_,_,_).
a(trueq,_,[],_).

append identically with plus as

append(X,Y,Z) ← foldr((cons,id),Y,X,Z).

member as

member(X,L) ← foldr((true,first),X,L,_).
a(first,F,[F|R],_).
One observes that this program takes advantage of the generalized version of foldr where the base case is not confined to the empty list. This applies to the following predicate as well. Suffix (p. 45 in [9]) can be expressed as

suffix(Xs,Ys) ← foldr((true,id1),Xs,Ys,_).
a(id1,L,L,_).
prefix (p. 45 in [9]) as

prefix(Xs,Ys) ← foldr((cons,trueq),_,Xs,Ys).
a(cons,X,Xs,[X|Xs]).
a(trueq,_,[],_).

Non-naïve reverse (p. 48 in [9]) can be expressed simply as

reverse(U,V) ← foldl((cons,id),[],U,V).

length (p. 49 in [9]) as

length(X,Y) ← foldr((p,nil),_,X,Y).
a(p,F,Z,[[]|Z]).
a(nil,_,[],[]).
delete (p. 53 in [9]) with currying as

delete(Xs,Z,Ys) ← foldr((purge(Z),null),_,Xs,Ys).
a(purge(Z),Z,Xs,Xs).
a(purge(Z),F,Xs,[F|Xs]) ← notequal(Z,F).
a(null,_,[],[]).

where the argument predicate purge leaves F if it is not equal to the Z to be removed. The predicate select can be defined as

select(X,Y,Z) ← split(Y,Y1,[X|Y2]), append(Y1,Y2,Z).
split(X,Y,Z) ← append(Y,Z,X).

ordered (p. 55 in [9]) as

ordered(X) ← foldr((p,q),_,X,_).
a(p,F,[],[F]).
a(p,F,[F1|R1],[F,F1|R1]) ← F ≤ F1.
a(q,X,[],[]).

insert as

insert(E,Xs,Ys) ← split(Xs,X1s,X2s), append(X1s,[E|X2s],Ys).

allowing permutation (p. 54 in [9]) to be expressed as

perm(Xs,Ys) ← foldr((p,q),_,Xs,Ys).
a(p,F,Z,W) ← insert(F,Z,W).
a(q,_,[],[]).

and permutation sort as

sort(Xs,Ys) ← perm(Xs,Ys), ordered(Ys).

The zip of two lists, with its ordinary definition

zip([],[],[]).
zip([X|Xs],[Y|Ys],[(X,Y)|R]) ← zip(Xs,Ys,R).

can be expressed as

zip(Xs,Ys,Zs) ← foldr((pair_elements,reverse(Xs)),_,Ys,(_,Zs)).
a(pair_elements,Y,([X|T],L),(T,[(X,Y)|L])).
a(reverse(Xs),_,[],(RevXs,[])) ← reverse(Xs,RevXs).

The only programs above which exploit foldleft instead of foldright are non-naïve reverse, greater_than, greatest common divisor and modulus. As mentioned in the conclusion, foldl can be defined in terms of foldr and vice versa in the declarative conception of logic programs. These connections between foldl and foldr can be exploited in adapting the above programs to different data flow modes, cf. [15]. The insistence on the fold recursion schemes pays off by simplifying the transformation among the declaratively equivalent (and possibly procedurally diverging) forms of programs.
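The foldl-based programs can be modelled in the same generator style; the Python sketch below (our illustration) mirrors the generalized relational foldl and checks the non-naïve reverse program reverse(U,V) ← foldl((cons,id),[],U,V):

```python
def foldl(p, q, y, xs):
    """Relational foldl(P,Q)(Y, L, Z): either Q(Y, L, Z) holds, or
    L = X.T, P(X, Y, Y') and foldl(P,Q)(Y', T, Z)."""
    yield from q(y, xs)                # base may fire before L is exhausted
    if xs:
        for z in p(xs[0], y):          # step on the head, threading the accumulator
            yield from foldl(p, q, z, xs[1:])

def cons(x, y):
    """a(cons,X,Y,[X|Y])."""
    yield [x] + y

def id_(y, l):
    """a(id,X,[],X)."""
    if l == []:
        yield y

# reverse(U,V) <- foldl((cons,id),[],U,V)
print(list(foldl(cons, id_, [], [1, 2, 3])))  # [[3, 2, 1]]
```

The accumulator plays the role of the first fold argument; consing each head onto it yields the reversed list when the input is exhausted.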
4.3 Parameterised Programs

As a simple example demonstrating how HOLP facilitates parameterization (and hence reuse) of program modules, let us consider non-deterministic finite state automata (NDFA). This in addition exemplifies a non-deterministic (i.e., non-functional) predicate argument, and is hence not just a variant of a functional program. An interpreter ndfa for an NDFA given by a state transition table can be formulated in ordinary logic programming using explicit recursion as

ndfa([],Q) ← finalstate(Q).
ndfa([X|S],Q1) ← table(X,Q1,Q2), ndfa(S,Q2).

where the first argument is the input sequence of symbols, and the second one is the current state. Using foldleft it can be generalized through parameterization by the reformulation

ndfa(InitialState, FinalState, Table)(S) ←
    InitialState(Q), foldl(Table, final(FinalState))(Q, S, _).
final(FinalState)(Q, nil, _) ← FinalState(Q).

The program argument predicates InitialState and FinalState specify initial and final states, respectively. The predicate Table is a 3-argument predicate specifying the transition table by telling the next state(s) given the current state and the current symbol in the string S. Using sect. 2.2 this can be rewritten to the ordinary logic program

ndfa(InitialState,FinalState,Table,S) ←
    a(InitialState,Q), foldl((Table,final(FinalState)),Q,S,_).
a(final(FinalState),Q,[],_) ← a(FinalState,Q).

This illustrates how the HOLP methodology supports abstraction and reuse of programs by first introducing a general program solution, and then obtaining concrete programs by supplying appropriate predicate parameters.
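A concrete instantiation can again be played through in Python (the automaton below is a made-up example, not from the paper): the transition generator plays the role of Table, yielding possibly several next states, and the base predicate accepts exactly when the input is exhausted in a final state.

```python
def foldl(p, q, y, xs):
    """Relational foldl, repeated so the sketch is self-contained."""
    yield from q(y, xs)
    if xs:
        for z in p(xs[0], y):
            yield from foldl(p, q, z, xs[1:])

# hypothetical NDFA over {a,b}: state 0 initial, state 1 final;
# it accepts exactly the strings ending in 'b'
DELTA = {('a', 0): [0], ('b', 0): [0, 1]}

def table(symbol, state):
    """Table(Symbol, State, NextState): possibly several next states."""
    yield from DELTA.get((symbol, state), [])

def final(state, rest):
    """final(FinalState)(Q, nil, _): input exhausted in a final state."""
    if rest == [] and state == 1:
        yield True

def ndfa(string):
    return any(foldl(table, final, 0, list(string)))

print(ndfa("ab"), ndfa("aa"))  # True False
```

The non-determinism of the predicate argument shows up as the two alternative successors of ('b', 0); foldl explores both branches.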
5 Primitive Recursive Functions as Logic Programs

Recursive function theory [16], see also e.g., [17], establishes the foundation for computation of functions of one or more arguments over the natural numbers. Let us consider forms of logic programs for computing such functions. A computable function f of n arguments is naturally represented in logic programming as a predicate f′ of n + 1 arguments, the last argument providing the result, viz.:

f(x1, ..., xn) = y corresponds to f′(X1, ..., Xn, Y)

Again, let the natural numbers 0, 1, 2, ... without loss of generality be represented in the logic programs by the lists nil, (nil:nil), (nil:nil:nil), ..., so that the natural number n is represented by a list of length n with n + 1 occurrences of nil.
5.1 Basic Recursive Functions

1. The successor function: succ(x) = x + 1. In logic programming this function can be specified and computed by the clause

succ′(X, (nil:X)).

2. The constant function: zero(x) = 0. This function can be computed by the clause

zero′(X, nil).

3. The projection functions: proj_n,i(x1, ..., xi, ..., xn) = xi. They are handled by clauses

proj′_n,i(X1, ..., Xi, ..., Xn, Xi).

for i ∈ {1, ..., n} for the relevant finite number of arguments 1, 2, ..., n, ...
5.2 Primitive Recursive Functions

The class of primitive recursive functions [16, 17] is established through the following principles of composition and primitive recursion, using the basic recursive functions as building blocks.

Composition

A primitive recursive function h is formed by composition as follows:

h(x1, ..., xn) = f(g1(x1, ..., xn), ..., gm(x1, ..., xn))

where f and the gi are already introduced primitive recursive functions. The corresponding primitive recursive predicate h′ is defined as follows:

h′(X1, ..., Xn, Y) ←
    g1′(X1, ..., Xn, Y1), ..., gm′(X1, ..., Xn, Ym), f′(Y1, ..., Ym, Y).
Primitive recursion scheme

The primitive recursion scheme defines a function r in terms of two functions f and g assumed to be already established members of the class of primitive recursive functions:

r(0, x2, ..., xn) = f(x2, ..., xn)
r(x1 + 1, x2, ..., xn) = g(x1, r(x1, ..., xn), x2, ..., xn)

This yields in particular for n = 1:

r(0) = f()
r(x1 + 1) = g(x1, r(x1))

This primitive recursion scheme in logic programming with the chosen representation of numbers becomes

primrec(nil, X2, ..., Xn, Y) ← f′(X2, ..., Xn, Y).
primrec((nil:X1), X2, ..., Xn, Z) ← primrec(X1, X2, ..., Xn, Y), g′(X1, Y, X2, ..., Xn, Z).

This predicate can be achieved with the natrec operator from sect. 3 as follows:

primrec(X1, X2, ..., Xn, Y) ← natrec(p(X2, ..., Xn), q(X2, ..., Xn))(X1, Y).

with the auxiliary predicate arguments

p(X2, ..., Xn)((nil:U), V, W) ← g′(U, V, X2, ..., Xn, W).
q(X2, ..., Xn)(Y) ← f′(X2, ..., Xn, Y).

Recalling the notion of (non-)recursively defined predicates from sect. 3.1, this program formulation proves
Theorem 1: The class of primitive recursive functions can be expressed by non-recursively defined predicates in logic programming augmented with the natrec recursion predicate, and by implication the foldright recursion operator.
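The primitive recursion scheme itself is easy to play through functionally; the following Python sketch (our illustration, using ordinary integers rather than lists of nil) implements the scheme directly and derives addition and multiplication from it:

```python
def primrec(f, g, x1, *rest):
    """r(0, x2..xn) = f(x2..xn);
    r(x1+1, x2..xn) = g(x1, r(x1, x2..xn), x2..xn)."""
    if x1 == 0:
        return f(*rest)
    return g(x1 - 1, primrec(f, g, x1 - 1, *rest), *rest)

def add(m, n):
    # f(n) = n, g(x, r, n) = succ(r)
    return primrec(lambda n_: n_, lambda x, r, n_: r + 1, m, n)

def mult(m, n):
    # f(n) = 0, g(x, r, n) = add(r, n)
    return primrec(lambda n_: 0, lambda x, r, n_: add(r, n_), m, n)

print(add(2, 3), mult(3, 4))  # 5 12
```

The logic programming version replaces the returned value by the extra result argument, and the recursion on x1 by natrec's structural recursion over the list numeral.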
6 Partial and General Recursive Functions

We now proceed from the class of primitive recursive functions to the more general class of partial and total (general) recursive functions. The general recursive functions (often referred to simply as recursive functions) are defined in the theory of recursive functions as an extension of the primitive recursive functions by putting at disposal one more operator:
Minimalization scheme

h(x1, ..., xn) = the least y such that f(y, x1, ..., xn) = 0

where f is a primitive recursive function. The function h defined above is in general partial, since there need not be any (least) such y for certain arguments. Hence the function is undefined on those arguments. The total (general) recursive functions are those recursive functions which are everywhere defined on the natural numbers for all their arguments. Partial and general recursive functions defined by this scheme hence call for corresponding logic programming definitions which are not guaranteed to terminate. This suggests introducing a variant, linrec, of the foldl recursion scheme which, in contrast to the previous use of the two fold schemes, is not bound to terminate by structural recursion through a given list:

linrec(P,Q)(X, Y) ← foldl(p(P), q(Q))(X, _, Y).
q(Q)(X, _, Y) ← Q(X, Y).
p(P)(_, X, Z) ← P(X, Z).

Through unfold/fold transformations this specialization of foldl can be written

linrec(P,Q)(X, Y) ← Q(X, Y).
linrec(P,Q)(X, Y) ← P(X, Z), linrec(P,Q)(Z, Y).

which evidences the elimination of the list argument on which foldl performs structural induction, hence admitting unbounded recursion. Now, the defining recursion scheme for the minimalization operator may be re-phrased initially, e.g., as follows in impure logic programming:

h′(X1, ..., Xn, Y) ← greq(nil, Y), f′(Y, X1, ..., Xn, nil), !.
greq(X, X).
greq(X, Z) ← succ(X, Y), greq(Y, Z).

Appealing to the linrec unbounded recursion scheme, the natural number ordering predicate may be re-expressed non-recursively as

greq(X, Y) ← a(linrec(succ, proj11), X, Y).

recalling the defining clauses

succ(X, (nil:X)).
proj11(X, X).
The above can be reformulated to avoid the use of cut (still ensuring termination in the case of total recursive functions) through

h′(X1, ..., Xn, Y) ← h″(X1, ..., Xn, nil, Y).

with

h″(X1, ..., Xn, Y, Y) ← f′(Y, X1, ..., Xn, nil).
h″(X1, ..., Xn, Y, Z) ← f′(Y, X1, ..., Xn, V), pos(V), succ(Y, Y′), h″(X1, ..., Xn, Y′, Z).

with the auxiliary

pos(X) ← succ(Y, X).

These clauses for h″ in turn can be expressed non-recursively with linrec as follows:

h″(X1, ..., Xn)(Y, Z) ← linrec(next(X1, ..., Xn), first(X1, ..., Xn))(Y, Z).
first(X1, ..., Xn)(X, X) ← f′(X1, ..., Xn)(X, nil).
next(X1, ..., Xn)(X, Y) ← f′(X1, ..., Xn)(X, V), pos(V), succ(X, Y).

Referring to the Church-Turing thesis, cf. e.g. [18], the HOLP reconstructions above prove the following theorem as a generalization of Theorem 1:
Theorem 2: All computable functions can be expressed in logic programs solely by non-recursive clausal predicate definitions augmented with the linrec recursion operator, and a fortiori the foldleft recursion operator.
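The unbounded character of linrec and its use for minimalization can be sketched in Python as well (our illustration; like the logic program, the search diverges if no witness exists):

```python
def linrec(p, q, x):
    """linrec(P,Q)(X,Y) <- Q(X,Y).
    linrec(P,Q)(X,Y) <- P(X,Z), linrec(P,Q)(Z,Y).
    p and q are generators of solutions; the recursion is unbounded."""
    yield from q(x)
    for z in p(x):
        yield from linrec(p, q, z)

def mu(f, args):
    """Minimalization: the least y with f(y, *args) == 0."""
    def q(y):                      # base: y is a zero of f
        if f(y, *args) == 0:
            yield y
    def p(y):                      # step: keep counting upwards otherwise
        if f(y, *args) != 0:
            yield y + 1
    return next(linrec(p, q, 0))

# least y with y*y >= n, phrased as a zero-search
f = lambda y, n: 0 if y * y >= n else 1
print(mu(f, (10,)))  # 4
```

The generator pair (p, q) corresponds exactly to the two linrec clauses: q detects the base case, p produces the successor configuration, and no list bounds the recursion depth.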
7 Emulating a Universal Turing Machine

As an alternative approach consolidating the above universality result concerning linrec, we devise a definite clausal logic program emulating the universal Turing machine, cf. [19], using the linrec operator as the sole recursion form.

Proof: The construction may proceed as follows: A Turing machine is here described by 6-tuples (σ, h, s, σ′, s′, d) where σ and σ′ are the current and new state, h is a tag indicating whether σ is a halt state (halt/continue), s and s′ are the current and next symbol on the tape under the reading head, and d is one of the tape directions left, none, right. The tape is represented by two lists covering the used left half and the used right half (the latter including the position under the reading head). These lists are extended according to needs during the execution. The Turing machine computation can be recursively defined by a predicate compute, defined as follows

compute((Left, Right, S), M) ← haltstate((Left, Right, S), M).
compute((Left, Right, S), M) ←
    update(M, (Left, Right, S), (NewLeft, NewRight, NewS)),
    compute((NewLeft, NewRight, NewS), M).

where the second argument M is the Turing machine specification represented as a read-only list or predicate (cf. sect. 4.3) of 6-tuples as specified above. S contains the current machine state σ. The auxiliary haltstate is defined in terms of member, which complies with the recursive form linrec.

haltstate((Left, Right, S), M) ← member((S, halt, _, _, _, _), M).
The auxiliary non-recursive predicate update takes care of the manipulation of symbols on the tape during one machine cycle.³ Clearly, with the indicated grouping of arguments into tuples, the compute predicate complies with the above linrec predicate:

compute((Left, Right, S), M) ← a(linrec(update(M), haltstate), (Left, Right, S), M).

This concludes, omitting details of the auxiliary predicates, the proof of the universality of the linrec scheme and therefore of the foldl operator.
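The construction can be made tangible with a small Python model (the machine below and the helper bodies are our own illustration, since the paper omits the details of update): update performs one machine cycle on the two-list tape, haltstate consults the 6-tuple table, and compute iterates in exactly the linrec pattern.

```python
BLANK = 'blank'

def haltstate(config, machine):
    """member((S, halt, _, _, _, _), M)."""
    _, _, state = config
    return any(q == state and tag == 'halt' for (q, tag, *_rest) in machine)

def update(machine, config):
    """One machine cycle: rewrite the scanned symbol and move the head."""
    left, right, state = config
    right = right or [BLANK]                      # extend the tape on demand
    for (q, tag, s, q2, s2, d) in machine:
        if tag == 'continue' and q == state and s == right[0]:
            right = [s2] + right[1:]
            if d == 'right':
                return (left + [right[0]], right[1:], q2)
            if d == 'left':
                return (left[:-1], [left[-1]] + right, q2)
            return (left, right, q2)
    raise ValueError('no applicable tuple')

def compute(config, machine):
    """linrec pattern: base case haltstate, step update."""
    while not haltstate(config, machine):
        config = update(machine, config)
    return config

# toy machine: overwrite 0s with 1s moving right, halt at the first blank
M = [(0, 'continue', '0', 0, '1', 'right'),
     (0, 'continue', BLANK, 1, BLANK, 'none'),
     (1, 'halt', BLANK, 1, BLANK, 'none')]

print(compute(([], ['0', '0'], 0), M))  # (['1', '1'], ['blank'], 1)
```

The while loop is the deterministic rendering of the two linrec clauses: try the base case, otherwise take one step and recurse on the new configuration.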
8 More General Recursion Schemes

We have shown that the fold list recursion operators suffice for coping with the recursions in many common logic programs. On the theoretical side we have shown that a specialization of foldl termed linrec provides unbounded recursion and thereby enables the computation of all computable functions. In spite of this theoretical adequacy, from a more practical point of view more recursion operators are needed to cover the recursions found in logic programming practice. We now turn to the more comprehensive class of programs from logic programming practice which falls outside the structural list induction recursion operators of sect. 3, at least when refusing extensive rewriting of the programs. The so-called problem reduction schemes considered here form a class of generic programs with roots in the general problem solver (GPS) paradigm, covering the divide-and-conquer scheme. We are concerned with the identification of generic programs in the form of a higher-order predicate which covers these types of programs by instantiation. We therefore extend the repertoire with an additional recursion operator realizing the and-or problem reduction schemes. This class covers metainterpreters (as a case of theorem provers), (top-down) parsers and similar programs.
9 And-Or Problem Reduction Schemes

Extending the basic repertoire in sect. 3, the following generalised and/or-reduction scheme is suggested in the form of a recursion predicate. This scheme actualises a goal-reduction approach in which an abstract problem or goal is recursively decomposed. The first clause takes care of elementary goals

³ If the logic program is obliged to halt after reporting successfully having reached a haltstate, a dual nonhaltstate should be included in update.
(cf. empty goals). The second clause manages the "or-case" in which alternative solutions are sought recursively to the argument goal. The third clause handles the "and-case" in which a goal is decomposed into two subgoals handled rather independently through the double recursion.

solve(Elemgoal, Decomp1, Decomp2, Comp2)(G, S1, S2, R) ←
    Elemgoal(G, S1, S2, R).
solve(Elemgoal, Decomp1, Decomp2, Comp2)(G, S1, S3, R) ←
    Decomp1(G, S1, G1, S2),
    solve(Elemgoal, Decomp1, Decomp2, Comp2)(G1, S2, S3, R).
solve(Elemgoal, Decomp1, Decomp2, Comp2)(G, S1, S4, R) ←
    Decomp2(G, S1, G1, G2, S2),
    solve(Elemgoal, Decomp1, Decomp2, Comp2)(G1, S2, S3, R1),
    solve(Elemgoal, Decomp1, Decomp2, Comp2)(G2, S3, S4, R2),
    Comp2(G, S1, R1, R2, R).

The parameter G is conceived as the goal, the pair of Si parameters as pre and post values in the solving of a subgoal, and the Ri as accumulating result arguments. The above recursion operator tends to be impractical due to its generality. For the case of list-structured goals where, in addition, as is sometimes the case, the result arguments can be done away with, there is essentially, as a specialization, the more handy variant solvel(ist):

solvel(Elemgoal, Expand)(G, S1, S2) ← Elemgoal(G, S1, S2).
solvel(Elemgoal, Expand)(G, S1, S3) ←
    Expand(G, S1, Gl, S2),
    solvel(Elemgoal, Expand)(Gl, S2, S3).
solvel(Elemgoal, Expand)(G:Gl, S1, S3) ←
    solvel(Elemgoal, Expand)(G, S1, S2),
    solvel(Elemgoal, Expand)(Gl, S2, S3).

The argument predicate Expand (cf. the above Decomp1) is thought of as a predicate for applying a rule to a (non-list) goal, yielding a list of subgoals. The concrete (meta)logic programming form of this operator using the apply predicate a is

solvel((Elemgoal,Expand),G,S1,S2) ← a(Elemgoal,G,S1,S2).
solvel((Elemgoal,Expand),G,S1,S3) ←
    a(Expand,G,S1,Gl,S2), solvel((Elemgoal,Expand),Gl,S2,S3).
solvel((Elemgoal,Expand),[G|Gl],S1,S3) ←
    solvel((Elemgoal,Expand),G,S1,S2),
    solvel((Elemgoal,Expand),Gl,S2,S3).

Of course other variant operators may be extracted from the solve scheme according to need. However, the versatility of the simplified solvel(ist) scheme is
demonstrated with the examples below.
9.1 Top-down Parser

Consider logic programs for parsing a string using a BNF grammar (assumed non-left-recursive) represented as a list gram of pairs giving the relationships between left hand sides and right hand sides of the productions. A top-down recursive descent parsing with the start symbol as initial goal may be formulated as follows in ordinary logic programming:

parse([], S, S).
parse(T, [T|S], S).
parse(N, S1, S2) ← member((N, Rhs), gram), parse(Rhs, S1, S2).
parse([X|T], S1, S3) ← parse(X, S1, S2), parse(T, S2, S3).

Here the first argument is the sentential form (a single terminal or nonterminal symbol or a possibly empty list of symbols) covering the difference list constituted by the second and third pre and post value arguments. Using the solvel(ist) recursion operator and at the same time abstracting the grammar as a parameter this becomes:

parse(Gram, Start, String) ← solvel(empty_goal, applyrule(Gram))(Start, String, nil).

where the argument predicate applyrule accepts the grammar as argument. The goal in the parser takes the form of a sentential form, i.e., a list of symbols (terminals and non-terminals) from which the given string is to be derived according to the grammar productions. The argument predicates are specified as follows:

empty_goal([], S, S).
applyrule(Gram)(G, G:S, [], S).
applyrule(Gram)(G, S, Gl, S) ← member((G, Gl), Gram).

The first clause for applyrule covers the case of the goal symbol being a terminal symbol, the second clause covers the case of a non-terminal symbol. This parser logic program again exemplifies the HOLP technique of parameterization with component programs: for instance, a choice of another representation of the grammar, say with a binary tree, is managed by replacement of the applyrule argument, leaving the defining clause for parse unaffected.
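The behaviour of solvel instantiated to the parser can be sketched in Python, modelling the nondeterminism of the three solvel clauses with generators. The grammar and helper names below are illustrative assumptions, not taken from the paper:

```python
# solvel scheme: an elementary-goal clause, an expand clause applying
# a rule to a non-list goal, and a list clause threading the pre/post state.
def solvel(elemgoal, expand, g, s1):
    yield from elemgoal(g, s1)               # clause 1: elementary goal
    for gl, s2 in expand(g, s1):             # clause 2: expand into subgoals
        yield from solvel(elemgoal, expand, gl, s2)
    if isinstance(g, list) and g:            # clause 3: goal list [G|Gl]
        for s2 in solvel(elemgoal, expand, g[0], s1):
            yield from solvel(elemgoal, expand, g[1:], s2)

def empty_goal(g, s):
    if g == []:                              # empty sentential form
        yield s

def applyrule(gram):
    def expand(g, s):
        if isinstance(g, str):
            for lhs, rhs in gram:            # nonterminal: apply a production
                if lhs == g:
                    yield rhs, s
            if s and s[0] == g:              # terminal: consume one input symbol
                yield [], s[1:]
    return expand

def parse(gram, start, string):
    # success iff some derivation consumes the whole input (post value nil)
    return any(rest == [] for rest in
               solvel(empty_goal, applyrule(gram), start, string))

GRAM = [("s", ["a", "s"]), ("s", ["b"])]     # s ::= a s | b
```

For instance, parse(GRAM, "s", list("aab")) succeeds while parse(GRAM, "s", list("aa")) fails.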
9.2 A Non-ground Metainterpreter

The proposed recursion operator solvel is reminiscent of a non-ground metainterpreter, cf. e.g. [23], for pure Prolog. The ordinary double recursive definition of the vanilla interpreter is

vanilla([]).
vanilla(A) ← clause(A, B), vanilla(B).
vanilla([A|Al]) ← vanilla(A), vanilla(Al).

This program is obtained from solvel as follows.

vanilla(G) ← solvel((empty_list,clause),G,_,_).
a(clause,A,_,B,_) ← clause(A,B).
a(empty_list,[],_,_).
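A propositional reading of vanilla's double recursion can be sketched in Python, with the clause database as a dictionary from heads to bodies (a simplification of ours, ignoring unification):

```python
# Vanilla interpreter over propositional Horn clauses: a conjunction
# (list) succeeds if every conjunct does; an atom succeeds if some
# clause body for it does. Mirrors the double recursion of vanilla/1.
def vanilla(db, g):
    if isinstance(g, list):                  # vanilla([]) / vanilla([A|Al])
        return all(vanilla(db, a) for a in g)
    return any(vanilla(db, body) for body in db.get(g, []))  # clause(A,B)

DB = {"a": [["b", "c"]],   # a <- b, c
      "b": [[]],           # b.
      "c": [[]]}           # c.
```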
9.3 A Logic Program Metainterpreter

Let us now outline an interpreter for pure logic programs employing a ground representation of object logic programs, cf. e.g. [23]. We conduct a top-down elaboration of the metainterpreter employing the introduced recursion operators. The ground metainterpreter program can be expressed similarly to the parser:

interpreter(LP, Goal, Sub) ← solvel(empty_goal, unifyrule(LP))(Goal, ([], []), (Vno, Sub)).

which evidences the kinship between recursive descent parsing and metainterpretation (SLD-resolution), the applyrule predicate being replaced with unifyrule. The latter takes the object logic program LP (a list of clauses replacing the list of productions) as argument. Terms of object programs are represented by

- const(C) for constants,
- var(Vno) for variables, where Vno is a variable number [], [[]], [[],[]], ... representing 0, 1, 2, ..., cf. sect. 4.1,
- lists of terms for compound terms as well as for atomic formulae and clauses.

unifyrule(LP)(G, (Vno1, Sub1), Gl, (Vno2, Sub2)) ←
    member(Clause, LP),
    rename(Clause, Vno1, (A:Gl), Vno2),
    unify(A, G, Sub1, Sub2).

The pairs (Vno, Sub) consist of the least number Vno available for renaming variables, and a substitution Sub. The substitutions take the form of lists of pairs (Vno, t), where t is a term. The unification and the other auxiliaries are handled using the solvel operator. The programs are available in the appendix. The reformulation of the metainterpreter for pure Prolog as a tour de force with the solvel scheme provides evidence of the versatility of the solvel operator and accompanying derivatives. On the theoretical side it is an indirect proof that solvel accommodates (all) general recursive functions and hence goes beyond the bounded recursion fold schemes.
9.4 Divide-and-Conquer Scheme as Derivative

There is another useful specialization of the general solve operator, namely the so-called divide-and-conquer scheme, e.g. [20, 21, 22], see also [8]:

divconq(Elemgoal, Decomp, Comp)(G, R) ← Elemgoal(G, R).
divconq(Elemgoal, Decomp, Comp)(G, R) ←
    Decomp(G, G1, G2),
    divconq(Elemgoal, Decomp, Comp)(G1, R1),
    divconq(Elemgoal, Decomp, Comp)(G2, R2),
    Comp(G, R1, R2, R).

with an omitted or-branching compared with the and-or problem reduction scheme, as well as with discarded pre and post parameters. This operator affords logic programs such as flattening of lists and quicksort as shown in [8].
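A functional sketch of divconq in Python, instantiated to flattening of nested lists (one of the programs mentioned above; the encoding details are our own illustration):

```python
# Divide-and-conquer scheme: solve elementary goals directly, otherwise
# decompose into two subgoals and compose their results.
def divconq(elemgoal, decomp, comp, g):
    r = elemgoal(g)
    if r is not None:
        return r
    g1, g2 = decomp(g)
    return comp(g, divconq(elemgoal, decomp, comp, g1),
                   divconq(elemgoal, decomp, comp, g2))

# Instantiation: flatten nested lists.
def elem(g):
    if not isinstance(g, list):
        return [g]          # a leaf flattens to a singleton
    if g == []:
        return []           # the empty list is already flat
    return None             # non-elementary: decompose

def flatten(g):
    return divconq(elem,
                   lambda g: (g[0], g[1:]),       # Decomp: head/tail
                   lambda g, r1, r2: r1 + r2,     # Comp: append results
                   g)
```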
9.5 Ackermann's Function

Finally, we give an example of a logic program that calls for the expressive power of solve.

ackermann([], N, [[]|N]).
ackermann([[]|M], [], V) ← ackermann(M, [[]], V).
ackermann([[]|M], [[]|N], V) ←
    ackermann([[]|M], N, V1), ackermann(M, V1, V).
It is represented as
ackermann(M,N,V) ← solve((elem,decomp1,decomp2,comp2),M,N,V).
elem([],N,[[]|N],_).
decomp1([[]|M],_,M,[[]]).
decomp2([[]|M],[[]|N],[[]|M],M,N).
comp2(_,_,_,_).
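With the list numerals [], [[]], [[],[]], ... read as 0, 1, 2, ..., the three clauses correspond directly to the usual recursion equations, sketched here in Python over machine integers:

```python
# Ackermann's function, mirroring the three clauses of the logic program:
#   a(0, n)     = n + 1
#   a(m+1, 0)   = a(m, 1)
#   a(m+1, n+1) = a(m, a(m+1, n))
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

The nested recursive call in the third equation is what places ackermann beyond the primitive recursive fold schemes.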
10 Summary and Conclusion

Actually, the above basic recursion operators including the fold schemes may be derived as specializations of the solve operator. Therefore it is our contention that the solve operator is a most general program scheme, a universal scheme as it were. From solve the more special program schemes needed in practice are effectively obtainable as instantiations, though for practical reasons they are defined independently as in sect. 3. The table below summarises the dependency relationships between the proposed operators (higher-order predicates), the hierarchy of derived predicates (parameterised programs), and sample programs in the rightmost column.
Operator   Derived operators             Programs
foldl      linrec, mu                    (general rec. functions),
                                         compute (univ. Turing mach.),
                                         ndfa (nondet. fin. state aut.),
                                         reverse, greaterthan, modulus, gcd
foldr      exists, all, natrec, primrec  member, nat, factorial,
                                         plus, times, exp,
                                         list, append, suffix, prefix,
                                         length, delete, permutation
solve      divconq, solvel               parse, vanilla (interpreter),
                                         (ground) interpreter,
                                         Ackermann's function
Recall that mu is the minimalization operator from the theory of recursive functions, compute is the logic program for emulating the universal Turing machine, and interpreter is the ground metainterpreter for definite clauses. On the basis of this study we conjecture that the above-introduced operators suffice for covering the recursions encountered in practical logic programs, if necessary after a little rewriting. Of course we have only evidence, not proof, of this conjecture. Therefore we challenge the reader to try to falsify the conjecture by producing as counterexample a realistic pure logic program which cannot be formulated in a reasonable way in the present framework of recursion operators. Let us add that as an eventual fallback, however, we can refer to the above metainterpreter, accepting any candidate logic program as term data.

In the context of functional programming theory the recursion fold schemes have been generalised to the concepts of catamorphism and paramorphism [24, 25]. These are functions over an underlying recursive data type whose definitional recursion pattern reflects that of the data type. These notions come with laws intended to assist the calculation of programs from specifications. It is an open problem how these notions can be adapted to logic programming. In functional programming methodology there have moreover been suggestions for generalising functions to relations with the aim of establishing algebraic rules for relational program derivation [26]. Since this work is restricted to binary predicates it is not obvious how to connect it with logic programming.

In the companion paper [15] we address the derivation of operational logic programs from declarative ones by considering data flow patterns for logic programs expressed by means of the recursion predicates. Obviously the recursion schemes ease data flow analysis by restricting the recursion patterns to fixed forms.
This facilitates automatic transformation of declarative, possibly non-terminating program solutions to declaratively equivalent terminating solutions for specified data flow patterns. To this end duality theorems are provided, which enable mutual replacement of recursion operators, rendering superfluous an exhaustive program analysis. The duality theorems yield the following relations between the fold operators

foldr(P, Q)(Y, L, W) ← foldl(p(P), q(Q))(W, L, Y).
foldl(P, Q)(Y, L, W) ← foldr(p(P), q(Q))(W, L, Y).

where the argument predicates p, q swap their arguments appropriately, i.e., p and q are defined by

q(Q)(Z, L, Y) ← Q(Y, L, Z).
p(P)(X, W, W1) ← P(X, W1, W).

This means that from a purely declarative point of view foldl can be dispensed with given foldr and vice versa. However, replacement of one fold operator by the other according to this theorem leads to programs behaving procedurally differently. This is applied as a HOLP programming refinement methodology in [15] for automatically selecting a procedurally appropriate form of the program, also honouring the termination obligation.
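The functional analogue of this duality can be sketched in Python: folding from the right equals folding from the left with the step arguments swapped and the list reversed (a simplified, functional rendering of the relational theorem above):

```python
# foldr and foldl over lists, with an explicit step function.
def foldr(f, z, xs):
    for x in reversed(xs):
        z = f(x, z)
    return z

def foldl(f, z, xs):
    for x in xs:
        z = f(z, x)
    return z

# Duality: foldr f z xs == foldl (swapped f) z (reversed xs).
def dual_foldr(f, z, xs):
    return foldl(lambda acc, x: f(x, acc), z, list(reversed(xs)))
```

With a non-commutative step such as subtraction the agreement is easy to check by hand.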
Acknowledgements The second author gratefully acknowledges a grant from
Uppsala University. Thanks are also due to the anonymous referees for detailed comments, further references and suggestions for future research.
References

[1] Gegg-Harrison, T. S., "Representing Logic Program Schemata in Prolog," in Proceedings of the Twelfth International Conference on Logic Programming 1995, (Sterling, L., ed.), MIT Press, London, pp. 467–481, 1995.
[2] Marakakis, E. and Gallagher, J. P., "Schema-Based Top-Down Design of Logic Programs Using Abstract Data Types," in Fribourg, L. and Turini, F. (eds.), Logic Program Synthesis and Transformation – Meta-Programming in Logic, LNCS 883, Springer-Verlag, 1994.
[3] Sterling, L. and Kirschenbaum, M., "Applying Techniques to Skeletons," in [4].
[4] Jacquet, J.-M. (ed.), Constructing Logic Programs, Wiley, 1993.
[5] Bird, R. and Wadler, Ph., Introduction to Functional Programming, Prentice Hall, 1988.
[6] Reade, C., Elements of Functional Programming, Addison-Wesley, 1989.
[7] Warren, D. H. D., "Higher-order extensions to PROLOG: are they needed?," in Michie, D. (ed.), Machine Intelligence 10, Ellis Horwood and Edinburgh University Press, pp. 441–454, 1982.
[8] Nilsson, J. Fischer and Hamfelt, A., "Constructing Logic Programs with Higher Order Predicates," in Proceedings of GULP-PRODE'95, the Joint Conference on Declarative Programming 1995, (Alpuente, M. and Sessa, M. I., eds.), Universita Degli Studi di Salerno, Salerno, pp. 307–312, 1995.
[9] Sterling, L. and Shapiro, E., The Art of Prolog, MIT Press, 1986, 1991.
[10] Andrews, P. B., An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof, Academic Press, 1986.
[11] Miller, D. A. and Nadathur, G., "Higher-order Logic Programming," in Proceedings of the Third International Logic Programming Conference, LNCS 225, Springer-Verlag, 1986.
[12] Chen, W., Kifer, M. and Warren, D. S., "HiLog: A Foundation for Higher-Order Logic Programming," J. Logic Programming, Vol. 15, 1993, pp. 187–230.
[13] Backus, J., "Can Programming be Liberated From the von Neumann Style? A Functional Style and its Algebra of Programs," Comm. of the ACM, Vol. 21, No. 8, pp. 613–641, 1978.
[14] Bird, R., "Lectures on Constructive Functional Programming," in Constructive Methods in Computing Science, (Broy, M., ed.), Springer-Verlag, 1989, pp. 151–216.
[15] Hamfelt, A. and Nilsson, J. Fischer, "Declarative Logic Programming with Primitive Recursive Relations on Lists," in Proceedings of the Joint International Conference and Symposium on Logic Programming, (Maher, M., ed.), MIT Press, 1996. Forthcoming.
[16] Kleene, S. C., Introduction to Metamathematics, Amsterdam, 1952.
[17] Boolos, G. S. and Jeffrey, R. C., Computability & Logic, Cambridge U. P., 1974.
[18] Kleene, S. C., Mathematical Logic, Wiley, 1967.
[19] Tärnlund, S.-Å., "Horn Clause Computability," BIT 17, pp. 215–226, 1977.
[20] Smith, D. R., "The Design of Divide and Conquer Algorithms," Science of Computer Programming, 5, pp. 37–58, 1985.
[21] Flener, P., Logic Program Synthesis from Incomplete Specifications, Kluwer, 1995.
[22] Flener, P. and Deville, Y., "Synthesis of Composition and Discrimination Operators for Divide-and-Conquer Programs," in [4].
[23] Nilsson, U. and Maluszynski, J., Logic, Programming and Prolog, Wiley, 1990.
[24] Meijer, E., Fokkinga, M. and Paterson, R., "Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire," in Proceedings of Fifth ACM Conference on Functional Programming Languages and Computer Architecture, FPCA'91, (Hughes, J., ed.), LNCS 523, Springer-Verlag, 1991, pp. 124–144.
[25] Meertens, L., "Paramorphisms," Formal Aspects of Computing, Vol. 4, No. 5, 1992, pp. 413–424.
[26] Bird, R. and de Moor, O., "Relational Program Derivation and Context-free Language Recognition," in A Classical Mind, (Roscoe, A. W., ed.), Prentice Hall, 1994, pp. 17–35.
A Auxiliaries for the Ground Metainterpreter

The substitutions are kept in reduced form, that is to say, (1) there are no two entries (Vno, t) and (Vno', t') with Vno = Vno', and (2) there is no entry (Vno, var(Vno')) such that Vno' has an entry in the substitution. We insist on reduced substitutions in order to avoid unbounded recursion at lookup in a substitution table. This enables lookup to be realized by the fold schemes as shown below. Unify applies the input substitutions to both terms and then unifies them to obtain the new substitutions.
unify(T1,T2,Sub1,Sub2) ←
    apply_subst(T1,Sub1,Ta), apply_subst(T2,Sub1,Tb),
    unify_term(Ta,Tb,Sub1,Sub2).

Apply substitutions looks up the substitution for variables. Its ordinary logic program double recursive definition is
apply_subst([],_,[]).
apply_subst(const(C),_,const(C)).
apply_subst(var(V),Sub,T) ← lookup(V,Sub,T).
apply_subst([T1|L1],Sub,[T2|L2]) ←
    apply_subst(T1,Sub,T2), apply_subst(L1,Sub,L2).

As a derivative of divconq we introduce applyrec

applyrec(P)([], []).
applyrec(P)(X:L, Y:M) ← applyrec(P)(X, Y), applyrec(P)(L, M).
applyrec(P)(X, Y) ← P(X, Y).
by
applyrec(P)(L, M) ← divconq(elemgoal(P), decons, comp)(L, M).
elemgoal(P)([], []).
elemgoal(P)(X, Y) ← P(X, Y).
decons(X:T, X, T).
comp(_, Y, T, Y:T).

With applyrec available, apply substitutions can be defined as

apply_subst(T1,Sub,T2) ← applyrec((substitute(Sub)),T1,T2).
a(substitute(Sub),const(C),const(C)).
a(substitute(Sub),var(V),T) ← lookup(V,Sub,T).

Unify term extends the given substitutions with the new substitutions. Its ordinary logic program double recursive definition is

unify_term([],[],Sub,Sub).
unify_term(const(C),const(C),Sub,Sub).
unify_term(var(V),T,Sub1,Sub2) ←
    not_occur(V,T), extend_subst(Sub1,(V,T),Sub2).
unify_term(T,var(V),Sub1,Sub2) ←
    not_occur(V,T), extend_subst(Sub1,(V,T),Sub2).
unify_term([T1|L1],[T2|L2],Sub1,Sub2) ←
    unify_term(T1,T2,Sub1,Sub0), unify_term(L1,L2,Sub0,Sub2).

With solvel it can be defined as

unify_term(T1,T2,Sub1,Sub2) ← solvel((empty_goal,expand),(T1,T2),Sub1,Sub2).
a(empty_goal,[],Sub,Sub).
a(empty_goal,(const(C),const(C)),Sub,Sub).
a(empty_goal,(var(V),T),Sub1,Sub2) ←
    not_occur(V,T), extend_subst(Sub1,(V,T),Sub2).
a(empty_goal,(T,var(V)),Sub1,Sub2) ←
    not_occur(V,T), extend_subst(Sub1,(V,T),Sub2).
a(expand,(L1,L2),Sub,PairList,Sub) ← zip(L1,L2,PairList).

All the auxiliaries above can be handled with the recursion schemes solvel and fold, as shown below. Not occur checks that a variable does not occur in the term substituting it. Its ordinary logic program double recursive definition is
not_occur(_,[]).
not_occur(V,const(C)).
not_occur(V1,var(V2)) ← noteq(V1,V2).
not_occur(V,[T|L]) ← not_occur(V,T), not_occur(V,L).

With applyrec it can be defined as

not_occur(V,T) ← applyrec((checknotoccur(V)),T,_).
a(checknotoccur(V),const(C),_).
a(checknotoccur(V),var(V2),_) ← noteq(V,V2).

The curried predicate checknotoccur(V) checks that V is absent in the term argument. Rename sees to it that variables in the chosen clause are standardized apart from those in the goal list. Its ordinary logic program double recursive definition is
rename([],Vno,[],Vno).
rename([H1|T1],Vno1,[H2|T2],Vno2) ←
    rename(H1,Vno1,H2,Out1Vno1),
    rename(T1,Vno1,T2,Out2Vno1),
    maximum(Out1Vno1,Out2Vno1,Vno2).
rename(const(C),Vno,const(C),Vno).
rename(var(V1),Vno1,var(V2),V2) ← add(V1,Vno1,V2).

With solvel it can be defined as

rename(Clause,VnoIn,NewClause,VnoOut) ←
    solvel((empty_goal,pn(VnoIn)),Clause,([],VnoIn),(NewClause,VnoOut)).
a(empty_goal,[],S,S).
a(pn(VnoIn),const(C),(Pr,MaxSoFar),[],(Res,MaxSoFar)) ←
    append(Pr,[const(C)],Res).
a(pn(VnoIn),var(V1),(Pr,MaxSoFar),[],(Res,NewMax)) ←
    add(V1,VnoIn,V2), append(Pr,[var(V2)],Res),
    maximum(MaxSoFar,V2,NewMax).

Extend substitution adds the new substitution to the input substitution list and assures that the output list contains no entries (T1,V) and (V,T2):

extend_subst(Sub1,(V,T),Sub2) ←
    replace(Sub1,(V,T),Sub0), insert_in_sub((V,T),Sub0,Sub2).
The ordinary logic program definition of replace is
replace([],_,[]).
replace([(X,V)|Sub1],(V,T),[(X,T)|Sub2]) ← replace(Sub1,(V,T),Sub2).
replace([(X,T1)|Sub1],(V,T),[(X,T1)|Sub2]) ←
    di_terms(V,T1), replace(Sub1,(V,T),Sub2).

With foldr it can be expressed as

replace(Sub1,S,Sub2) ← foldr((p(S),q),_,Sub1,Sub2).
a(p((V,T)),(X,V),Zs,[(X,T)|Zs]).
a(p((V,T)),(X,T1),Zs,[(X,T1)|Zs]) ← di_terms(V,T1).
a(q,_,[]).
di_terms(Vno1,var(Vno2)) ← noteq(Vno1,Vno2).
di_terms(Vno,const(C)).

We have already seen that plus, greaterthan, maximum and zip can be straightforwardly handled with the fold schemes. Noteq can be expressed as

noteq(X,Y) ← greaterthan(X,Y) ; greaterthan(Y,X).
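The fold reading of replace can be sketched in Python with a right fold over the substitution entries (the tuple encoding of terms is our own illustration):

```python
from functools import reduce

# replace: rewrite every entry whose bound term is the variable v to the
# new term t, expressed as a right fold over the entry list, corresponding
# to the p(S) clauses above.
def replace(entries, v, t):
    def step(pair, acc):
        x, term = pair
        return [(x, t if term == ("var", v) else term)] + acc
    return reduce(lambda acc, pair: step(pair, acc), reversed(entries), [])
```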
The following auxiliaries can be handled as
lookup(V,Sub,T) ← member((V,T),Sub).
lookup(V,Sub,V) ← nonmember((V,T),Sub).
nonmember((V,T),Sub) ← all(di_terms(V),Sub).
a(di_terms(V1),(V2,T)) ← di_terms(V1,V2), di_terms(V1,T).

Finally, insert in substitutions is represented as

insert_in_sub((V,T),Sub1,[(V,T)|Sub1]) ← nonmember((V,_),Sub1).
insert_in_sub((V,T),Sub1,Sub1) ← member((V,_),Sub1).
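Putting the appendix pieces together, the unification machinery (apply_subst, the occurs check, and extend_subst maintaining reduced substitutions) can be sketched in Python; the term and substitution encodings (tuples and dictionaries) are our own illustration:

```python
# Terms: ("const", c), ("var", v), or a list of terms (compounds, atoms, clauses).
def apply_subst(t, sub):
    if isinstance(t, list):
        return [apply_subst(x, sub) for x in t]
    if t[0] == "var":
        return sub.get(t[1], t)     # lookup: bound term, else the variable itself
    return t

def occurs(v, t):                   # not_occur, negated
    if isinstance(t, list):
        return any(occurs(v, x) for x in t)
    return t == ("var", v)

def unify(t1, t2, sub):
    # apply the input substitution to both terms, then unify them
    return unify_term(apply_subst(t1, sub), apply_subst(t2, sub), sub)

def unify_term(a, b, sub):
    if isinstance(a, list) and isinstance(b, list):
        if len(a) != len(b):
            return None
        for x, y in zip(a, b):      # thread the growing substitution
            sub = unify(x, y, sub)
            if sub is None:
                return None
        return sub
    if a == b:
        return sub
    for t, u in ((a, b), (b, a)):
        if t[0] == "var" and not occurs(t[1], u):
            # extend_subst: replace the variable in old entries (reduced form),
            # then insert the new binding
            new = {v: apply_subst(tm, {t[1]: u}) for v, tm in sub.items()}
            new[t[1]] = u
            return new
    return None
```

For example, unifying f(X, a) with f(b, Y), encoded as term lists, binds X to b and Y to a.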
A fundamental problem reduction scheme is

solve(nil).
solve(X:Xl) ← rule(X, Yl), append(Yl, Xl, Zl), solve(Zl).

which treats a list of abstract "problems" by converting them into (possibly empty) lists of subproblems by means of a rule predicate specific to the problem context. As a variant we give the difference list version, which avoids the computationally inefficient concatenation with append:

solve(Xl - Xl).
solve(X:Xl - Yl) ← rule(X, Yl - Zl), solve(Xl - Zl).

The former problem reduction scheme may be rephrased as

solve(nil).
solve(Xl) ← derive(Xl, Yl), solve(Yl).

replacing rule with the problem-specific predicate derive. This reformulation renders solve evidently amenable to recursion elimination by means of the linrec recursion operator. It moreover suggests an alternative problem solving scheme

solve'(Xl) ← transformation(Xl, nil).
transformation(Xl, Xl).
transformation(Xl, Zl) ← derive(Xl, Yl), transformation(Yl, Zl).

which is recognized as the reflexive and transitive closure of derive.
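The rephrased scheme, solving a list of problems by repeated derive steps until nil, is the same linear pattern as linrec; a Python sketch with a toy rule (counting problems down to nothing; the rule is our own illustrative example):

```python
# Problem reduction: repeatedly replace the first problem by its
# subproblems until the problem list is empty, i.e., the transitive
# closure of derive as in the transformation scheme.
def solve(xs, rule):
    while xs:
        xs = rule(xs[0]) + xs[1:]   # derive: expand one problem
    return True

# Toy rule: problem n reduces to subproblem n-1; 0 has no subproblems.
count_down = lambda n: [n - 1] if n > 0 else []
```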