Centrum voor Wiskunde en Informatica
A Declarative Approach for First-Order Built-in’s of Prolog K.R. Apt, E. Marchiori, C. Palamidessi Computer Science/Department of Software Technology
CS-R9246 1992
A Declarative Approach for First-Order Built-in's of Prolog
Krzysztof R. Apt, CWI, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands, and Faculty of Mathematics and Computer Science, University of Amsterdam, The Netherlands
Elena Marchiori, CWI, P.O. Box 4079, 1009 AB Amsterdam, The Netherlands
Catuscia Palamidessi, Dipartimento di Informatica e Scienze dell'Informazione, Università di Genova, Italy
Abstract
We provide here a framework for studying Prolog programs with various built-in's, including arithmetic relations and metalogical relations such as var and ground. To this end we propose a new, declarative semantics and prove completeness of the Prolog computation mechanism w.r.t. this semantics. We also show that this semantics is fully abstract in an appropriate sense. Finally, we provide a method for proving termination of Prolog programs with built-in's which uses this semantics. The method is shown to be modular and is illustrated by proving termination of a number of programs, including the unify program of Sterling and Shapiro [SS86].
1991 Mathematics Subject Classification: 68Q40, 68T15. 1991 CR Categories: F.3.2, F.4.1, H.3.3, I.2.3.
Keywords and Phrases: Prolog programs, built-in's, declarative semantics, termination.
Notes. This research was partly done during the third author's stay at the Centre for Mathematics and Computer Science, Amsterdam. The work of K.R. Apt was partly supported by ESPRIT Basic Research Action 6810 (Compulog 2). The work of C. Palamidessi was partly supported by ESPRIT Basic Research Action 3020 and by the Italian CNR (Consiglio Nazionale delle Ricerche). The work of E. Marchiori was partly supported by ESPRIT Basic Research Action 6810 (Compulog 2) and by the Italian CNR under Grant No. 89.00026.69. A short version of this paper appeared as [AMP92].
1 Introduction
1.1 Motivation
The theory of logic programming allows us to treat formally only pure Prolog programs, that is, those whose syntax is based on Horn clauses. Any formal treatment of more realistic Prolog programs has to take into account the use of various built-in's. Some of them, like arithmetic relations, seem trivial to handle, as they simply refer to some theory of arithmetic. However, the restrictions on the form of their arguments (like the requirement that both arguments of < should be ground) cause complications which the theory of logic programming does not properly account for. In particular, in the presence of arithmetic relations the independence of refutability from the selection rule fails, as the goal ← x = 2, 1 < x shows.
Further, the use of metalogical relations (like var, ground) leads to various additional problems. Clearly, var cannot be handled using the traditional semantics based on first-order logic, because var(x) is true whereas some instances of it are not. In the presence of nonvar another complication arises: the well-known Lifting Lemma (see Lloyd [Llo87]) used to prove completeness of SLD-resolution does not hold: for a non-variable term t the goal ← nonvar(t) can be refuted whereas its more general version ← nonvar(x) cannot.
Finally, the study of termination of Prolog programs in the presence of the above built-in's calls for some new insights. For example, the program list

list([]) ←.
list([X|Xs]) ← nonvar(Xs), list(Xs).
which recognizes a list, always terminates, whereas its pure Prolog counterpart obtained by dropping the atom nonvar(Xs) may diverge.
The aim of this paper is to provide a systematic account of the above-mentioned class of built-in's of Prolog. This class includes the arithmetic relations (like +, < etc.) and some metalogical relations (like var, ground etc.). To distinguish them from those built-in's which refer to clauses and goals (like call and assert), we call them first-order built-in's. Hence the title.
The main tool in our approach is a new, non-standard declarative semantics which associates with each relation symbol input and output substitutions. It is introduced in Section 2. We also prove there a completeness result connecting this semantics with the Prolog computation mechanism. We show that this semantics is a natural extension of the S-semantics of Falaschi et al. [FLMP89], in the sense that it is isomorphic to the S-semantics for pure Prolog programs. Moreover, we show that our semantics is in a sense the simplest such extension, by proving that it is fully abstract w.r.t. conjunctions of goals.
This semantics is crucial for the study of termination of Prolog programs that use first-order built-in's. Our approach to this subject combines the use of level mapping functions (which assign elements of a well-founded set to atoms) with the above semantics. In this respect it is similar to that of Apt and Pedreschi [AP90], which called for the use of level mappings assigning natural numbers to ground atoms, together with declarative semantics. However, important differences arise due to the presence of built-in's. First, we have to analyze the original program and not its ground version. Second, in the presence of first-order built-in's it seems natural to study programs that terminate for all goals, and not only for all ground goals as in Apt and Pedreschi [AP90]. So different characterization results are needed. These issues are dealt with in Section 3, where we also show how termination of Prolog programs with first-order built-in's can be dealt with in a modular way. In Section 4 we apply our approach to prove termination of the above list program, the typed version of the append program and a version of the unify program of Sterling and Shapiro [SS86].
We are aware of two other approaches to defining the meaning of Prolog first-order built-in's, namely that of Börger [Bor89] based on so-called dynamic algebras, and that of Deransart and Ferrand [DF87] based on an abstract interpreter. Their aim is to provide semantics to the complete Prolog language, whereas ours is to extend the declarative semantics to Prolog programs with first-order built-in's so that one can reason about such programs. In this respect our approach has the same aim as that of Hill and Lloyd [HL88], where all metalogical features of Prolog are represented in a uniform way by means of a representation of the object level in the meta-level, reminiscent of the Gödelization process in Peano arithmetic.
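To make the termination contrast above concrete, here is a minimal executable sketch (':-' written for '←'; the predicate name pure_list and the sample queries are ours, added only for illustration):

list([]).
list([X|Xs]) :- nonvar(Xs), list(Xs).

% Pure counterpart, obtained by dropping the nonvar test.
pure_list([]).
pure_list([_|Xs]) :- pure_list(Xs).

% ?- list(Ys).       % terminates: the only answer is Ys = []
% ?- pure_list(Ys).  % produces Ys = [], [_], [_,_], ... and never terminates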
1.2 Preliminaries
In what follows we study logic programs extended by various built-in relations. We call the resulting objects Prolog programs, or simply programs, and identify pure Prolog programs with logic programs. Prolog programs can be executed by means of LD-resolution, which consists of the usual SLD-resolution combined with the leftmost selection rule, appropriately extended to deal with the built-in relations. By the length l(ξ) of an LD-derivation ξ we mean the number of its goals.
We often manipulate various sets of variables. In general x, y stand for sequences of different variables. Sometimes we identify such sequences with sets of variables. Given a substitution θ and a set of variables x we denote by θ|x the substitution obtained from θ by restricting its domain, Dom(θ), to x. By Ran(θ) we denote the set of variables that appear in the terms of the range of θ. A renaming is a substitution that is a permutation of the variables constituting its domain. Recall that an mgu θ of A and B is idempotent if θθ = θ, and is relevant if Ran(θ) ⊆ Var(A, B). The relation "more general than", defined on pairs of atoms, terms or substitutions, is denoted by ≤.
Let s be a term. Then s_i denotes the i-th argument of s, when it is defined, nodes(s) denotes the number of nodes of s in the tree representation, a(s) denotes the arity of the principal functor of s and funct(s) denotes its function symbol.
It is convenient to associate with each pair of terms that unify a unique idempotent (hence relevant) mgu in the sense of Apt [Apt90, page 502]. Given such a pair s, t we denote it by mgu(s, t). Further, we associate with each pair of sequences of terms that unify a unique idempotent (hence relevant) mgu, defined as follows:
mgu((), ()) = ε, where () denotes the empty sequence;
mgu((s, u), (t, v)) = θ mgu(uθ, vθ), where θ = mgu(s, t), s and t are terms, and u and v are sequences of terms.
It is not difficult to show that mgu(u, v) is indeed an idempotent mgu of the sequences u and v. Then we associate with each pair of atoms A and B that unify the mgu mgu(u, v), where u and v are the sequences of arguments of A and B, respectively. We denote this mgu by mgu(A, B).
Given an expression (term, atom, goal, ...) or a substitution E we denote the set of variables occurring in it by Var(E). We often write θ|E to denote θ|Var(E). The set of all variables is denoted by Var.
Atoms of the form p(x), where p is a relation, are called elementary atoms, and atoms containing a built-in relation are referred to as built-in atoms. Finally, atoms containing a relation used in a head of a clause of a program P are said to be defined in P.
In the context of logic programs, or more generally Prolog programs, it is convenient to treat sequences of atoms as conjunctions (sometimes called conjuncts). Usually A, B denote such conjuncts.
The rest of the notation used is more or less standard and essentially follows Lloyd [Llo87]. Recall that, if θ1, ..., θn are the consecutive mgu's along a refutation of a goal G in the program P, then the restriction (θ1...θn)|Var(G) of θ1...θn to the variables of G is called the computed answer substitution (c.a.s. for short) of P ∪ {G}. In this paper we also associate c.a.s.'s with prefixes of LD-derivations in the obvious way. These prefixes of LD-derivations are also called partial derivations.
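The sequential definition of mgu's of sequences given above can be mirrored directly in executable Prolog, where (=)/2 computes an idempotent mgu and applies it implicitly to the remaining terms. The predicate name mgu_seq below is ours and the sketch is only illustrative:

mgu_seq([], []).
mgu_seq([S|Ss], [T|Ts]) :-
    S = T,            % theta = mgu(S, T), applied implicitly to Ss and Ts
    mgu_seq(Ss, Ts).  % proceed on the instantiated remainders

% ?- mgu_seq([f(X), X], [f(a), Y]).   % X = a, Y = a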
2 The declarative semantics
2.1 Motivation
In this section we define a declarative semantics appropriate for describing the operational behaviour of Prolog programs. First, let us see why it is impossible to achieve this goal by simply modifying one of the usually considered declarative semantics.
The standard declarative semantics, based on the (ground) Herbrand models and due to van Emden and Kowalski [vEK76], is clearly inadequate to deal with first-order built-in's. Indeed, in this semantics, if an atom is true in a given interpretation then all its ground instances are. However, for every ground term t, var(t) should be false in every model whereas var(x) should be true. Therefore we say that var is a non-monotonic relation. We conclude that any declarative modeling of non-monotonic relations requires an explicit introduction of non-ground atoms in the Herbrand interpretations, in order to define the truth value of an atom independently from its ground instances.
The first declarative semantics based on non-ground atoms was given by Clark [Cla79], with the aim of defining the validity of open atoms (like p(x)) in terms of their truth value in the least Herbrand model. Subsequently, other declarative models based on non-ground atoms were investigated in Falaschi et al. [FLMP89]: the C-semantics, which was shown to be equivalent to Clark's semantics, and the S-semantics. However, none of these models is suitable for Prolog programs because, like the standard semantics of van Emden and Kowalski [vEK76], the resulting definition of truth treats the body of a clause as a logical conjunction, i.e. the ',' is interpreted as an 'and', and this means that the order of the literals in the body is irrelevant. On the other hand, the presence of built-in relations, in particular of the non-monotonic ones, makes this order relevant. Consider for instance

P1: p(X) ← var(X), q(X).
    q(a) ←.

and

P2: p(X) ← q(X), var(X).
    q(a) ←.

The behaviour of the goal ← p(x) in these programs is different (in P1 it succeeds, whereas in P2 it fails). In other words, the independence from the selection rule and the Switching Lemma of Lloyd [Llo87] do not hold for Prolog programs. If we want to characterize declaratively the operational behaviour of goals, we must therefore describe the meaning of ',' in the bodies of clauses in a non-commutative way; more precisely, we have to mimic the leftmost selection rule of Prolog.
However, the intended model cannot be obtained simply by modifying the interpretation of ',' in the C-semantics. The reason is that the domain structure of the C-semantics is too poor: it does not allow us to model the meaning of non-monotonic relations. Indeed, in the C-semantics the interpretations are upward closed, that is, if A belongs to (is true in) an interpretation I, then all its instances belong to I as well. On the other hand, in the S-semantics the interpretations are not upward closed. However, the S-semantics is monotonic, that is, A is true in an interpretation I if a more general version of A belongs to I.
Moreover, in the presence of built-in relations like nonvar, another problem arises: the goal ← nonvar(x) fails whereas for every non-variable term t the goal ← nonvar(t) succeeds. Therefore we say that nonvar is a non-down-monotonic relation. Due to the presence of non-down-monotonic relations the Lifting Lemma (see Lloyd [Llo87]) does not hold for Prolog programs. Consider for instance

P3: p(X) ← nonvar(X).

With this program, for every non-variable term t the goal ← p(t) has a refutation, whereas ← p(x) fails. This example shows that it is not sufficient to identify the meaning of a relation p with the set of (computed answer) substitutions which p is able to compute, in a sense the post-conditions which are verified after the possible executions of the goal ← p(x). We also need a pre-condition, i.e. information about the substitution by which the atom p(x) is instantiated before starting the computation. A possible way to do this is to enrich the domain with another component, thus explicitly representing the substitution before execution.
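The three programs P1, P2 and P3 above can be checked directly in a standard Prolog system. In the following sketch the predicates are renamed (p1/q1, p2/q2, p3) so that they can coexist in one file; the queries and outcomes are only illustrative:

% P1: the var test precedes q, so the goal succeeds.
p1(X) :- var(X), q1(X).
q1(a).

% P2: q binds X to a before the var test, so the goal fails.
p2(X) :- q2(X), var(X).
q2(a).

% P3: refutability is not preserved under generalization.
p3(X) :- nonvar(X).

% ?- p1(X).     % succeeds with X = a
% ?- p2(X).     % fails
% ?- p3(f(Y)).  % succeeds
% ?- p3(X).     % fails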
2.2 Θ-semantics
This leads us to consider objects of the form ⟨σ, p(x), θ⟩, where σ represents the pre-substitution (or input substitution) and θ represents the post-substitution (or output substitution) for the goal ← p(x)σ. For technical convenience we equivalently represent these triples as pairs of the form ⟨A, θ⟩, where A is the atom obtained by applying the input substitution to the elementary atom p(x), i.e. A = p(x)σ. In Section 2.6 we prove the full abstraction of this model, thus showing that all the information we encode in this semantical structure is in fact necessary. Of course, we can restrict our attention to pairs ⟨A, θ⟩ in which θ does not affect the variables that do not appear in A.
First, we deal with built-in relations. For any such relation p we stipulate a set [[p]] of pairs defining its operational behaviour. We list here some cases. In the definitions below, "=" is the well-known built-in standing for "is unifiable with".
[[var]] = {⟨var(x), ε⟩ | x ∈ Var};
[[nonvar]] = {⟨nonvar(s), ε⟩ | s ∉ Var};
[[=]] = {⟨s = t, θ⟩ | θ = mgu(s, t)};
[[>]] = {⟨s > t, ε⟩ | s, t are integers and s > t};
[[constant]] = {⟨constant(a), ε⟩ | a is a constant};
[[compound]] = {⟨compound(s), ε⟩ | s is a compound term};
[[functor]] = {⟨functor(t, f, n), θ⟩ | Dom(θ) ⊆ {f, n}, nθ is a natural number and t = (fθ)(t1, ..., t_nθ) for some t1, ..., t_nθ, or n is a natural number, f is a function symbol, Dom(θ) = {t} and tθ = f(X1, ..., Xn) where X1, ..., Xn are fresh variables};
[[:=]] = {⟨x := s, {x/t}⟩ | x ∈ Var and s is a ground arithmetic expression whose value is t};
[[arg]] = {⟨arg(n, s, t), θ⟩ | Dom(θ) ⊆ {t} and tθ = s_n, or Dom(θ) = {s_n} and s_nθ = t};
[[\==]] = {⟨s \== t, ε⟩ | s ≠ t}.
We assume that the set of pairs associated with a built-in relation describes correctly its operational behaviour, in the following sense.
Definition 2.1 Let A be an atom with a built-in relation p. Then for every conjunct B the goal ← Bθ is a resolvent of ← A, B iff ⟨A, θ⟩ ∈ [[p]]. □
Notice that in our approach we do not distinguish between failures and errors. For example, in Prolog the evaluation of the goal ← X := Y + 1 results in an error and not in a (backtrackable) failure. By further refining the structure of the sets [[p]] we could easily incorporate this distinction in the semantics.
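The behaviour stipulated by the sets [[p]] above corresponds, for the most part, to what a standard Prolog system computes. The following illustrative queries (with expected answers as comments) assume such a system; note that the text's := corresponds to is/2, constant roughly to atomic/1, and that standard > also evaluates arithmetic expressions:

% ?- var(X).                      % succeeds with the empty substitution
% ?- nonvar(f(Y)).                % succeeds
% ?- f(X, b) = f(a, Y).           % X = a, Y = b   (theta = mgu)
% ?- 3 > 1.                       % succeeds
% ?- functor(foo(a, b), F, N).    % F = foo, N = 2
% ?- functor(T, foo, 2).          % T = foo(_A, _B), fresh variables
% ?- X is 2 + 3.                  % X = 5          (the text's X := 2 + 3)
% ?- arg(2, foo(a, b), A).        % A = b
% ?- f(a) \== f(b).               % succeeds with the empty substitution
% ?- X is Y + 1.                  % error, not a backtrackable failure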
Next, we consider atoms defined by the program. First we introduce the following generalization of the Herbrand base and of Herbrand interpretations.
Definition 2.2 (Θ-domain and Θ-interpretation) Let P be a Prolog program. The Θ-base B_P of P is the set of all pairs ⟨A, θ⟩, where A is an atom defined in P and θ is a substitution s.t. Dom(θ) ⊆ Var(A). A Θ-interpretation I of P is a subset of the Θ-base B_P. □
To define truth in Θ-interpretations we have to model appropriately the proof-theoretic properties of computed answer substitutions. To this end it is important to reflect on them first. The following lemma relates c.a.s.'s of resolvents of a goal with c.a.s.'s of the goal. It is a consequence of Corollary 3.5 of Apt and Doets [AD92], to which the reader is referred for the proof.
Lemma 2.3 (C.a.s.) Consider an atomic goal ← A with input clause p(x) ← B. Let θ = mgu(A, p(x)) be s.t. Dom(θ) = x, and let η be a c.a.s. of P ∪ {← Bθ}. Suppose that Ran(η) ∩ Var(p(x)θ) ⊆ Var(Bθ). Then η|A is a c.a.s. of P ∪ {← A}.
This lemma provides a sufficient condition guaranteeing that a c.a.s. of a goal coincides, on the variables of the goal, with a c.a.s. of its resolvent. Let us give an example showing that this condition is needed. Consider the program
P: p(X,Y) ← q(X).
   q(X) ← X = f(Y).
and the goal ← p(X, Y). Take as input clause p(X', Y') ← q(X'). Then θ = mgu(p(X, Y), p(X', Y')) = {X'/X, Y'/Y} and ← q(X) is the corresponding resolvent. Now η = {X/f(Y)} is a c.a.s. of P ∪ {← q(X)}, but it is not a c.a.s. of P ∪ {← p(X, Y)}.
Definition 2.4 Let A, B be conjuncts and let σ and η be substitutions. We say that (A, B, σ, η) is a good tuple if the following conditions are satisfied:
(i) Ran(σ) ∩ Var(B) ⊆ Var(A) (the variables introduced by σ that occur in B also occur in A),
(ii) Ran(η) ∩ (Var(A, B) ∪ Ran(σ)) ⊆ Var(Bσ) (the variables introduced by η that occur in A, B or in Ran(σ) also occur in Bσ). □
The importance of this, admittedly esoteric, notion is revealed by the following lemma.
Lemma 2.5 (Good Tuple) Consider a goal ← A, B. Then θ is a c.a.s. of P ∪ {← A, B} iff for some σ and η:
σ is a c.a.s. of P ∪ {← A},
η is a c.a.s. of P ∪ {← Bσ},
θ = (ση)|(A, B),
(A, B, σ, η) is a good tuple.
Proof. The proof is lengthy and tedious and can be found in the appendix. □
This lemma shows that the c.a.s.'s for a compound goal ← A, B cannot be obtained by simply composing each c.a.s. for A with each c.a.s. for B. The notion of a good tuple formalizes the conditions that σ and η have to satisfy. Both conditions of Definition 2.4 are needed: consider for example the program
P: p(Z) ←.
and the goal G = ← p(X), p(Y). Then σ = {X/Y} is a c.a.s. for ← p(X) and η = ε is a c.a.s. of P ∪ {← p(Y)σ}, but (ση)|G = {X/Y} is not a c.a.s. of P ∪ {G}. This shows that the first condition of Definition 2.4 is needed. Now σ = ε is also a c.a.s. for ← p(X) and η = {Y/X} is a c.a.s. of P ∪ {← p(Y)σ}, but (ση)|G = {Y/X} is not a c.a.s. of P ∪ {G}. This shows that the second condition of Definition 2.4 is needed.
Since we want to model the meaning of a conjunct w.r.t. a post-substitution in such a way that a precise match with the procedural semantics is maintained, the notion of a good tuple will be crucial also for the semantic considerations.
The next step is dictated by simplicity considerations. We shall restrict our attention to Prolog programs in a certain form. Then, after proving soundness and completeness for these programs, we shall return to the general case.
Definition 2.6 (Homogeneous Programs) A Prolog clause is called homogeneous if its head is an elementary atom. A Prolog program is called homogeneous if all its clauses are homogeneous. □
We now define truth in Θ-interpretations for homogeneous programs. It relies on the notion of a good tuple. Given a conjunct A of atoms we denote by l(A) its length, i.e. the number of atoms in A. If l(A) = 0 we denote A by true.
Definition 2.7 (Truth in Θ-interpretations) Let I be a Θ-interpretation of a homogeneous Prolog program P. The truth of a conjunct A in I w.r.t. a (post-)substitution θ, denoted by I ⊨ ⟨A, θ⟩, is defined by induction on l(A), the length of A.
l(A) = 0. Then A = true. I ⊨ ⟨true, θ⟩ iff θ = ε.
l(A) = 1. Then A = A for an atom A.
I ⊨ ⟨A, θ⟩ iff ⟨A, θ⟩ ∈ [[p]], where A is a built-in atom with relation symbol p;
I ⊨ ⟨A, θ⟩ iff ⟨A, θ⟩ ∈ I, where A is defined in P.
l(A) > 1. Then A = A, B for an atom A and a non-empty conjunct B. I ⊨ ⟨(A, B), θ⟩ iff there exist σ, η s.t. θ = (ση)|(A, B) and
- I ⊨ ⟨A, σ⟩,
- I ⊨ ⟨Bσ, η⟩,
- (A, B, σ, η) is a good tuple.
The truth of a homogeneous clause H ← B of P in I, denoted by I ⊨ H ← B, is defined as follows.
I ⊨ ⟨H ← B, σ⟩ iff for all η s.t. Dom(σ) = Var(H), Ran(σ) ∩ Var(H ← B) = ∅ and Ran(η) ∩ Var(Hσ) ⊆ Var(Bσ):
I ⊨ ⟨Bσ, η⟩ implies I ⊨ ⟨Hσ, η|Hσ⟩;
I ⊨ H ← B iff for all σ, I ⊨ ⟨H ← B, σ⟩.
I is a Θ-model of P iff all variants of the clauses of P are true in I. □
The following lemmas will be useful for reasoning about truth.
Lemma 2.8 (Monotonicity) Let I, J be Θ-interpretations, A a conjunct, and θ a substitution. If I ⊨ ⟨A, θ⟩ and I ⊆ J, then J ⊨ ⟨A, θ⟩.
Proof. Straightforward by induction on the length of A. □
Lemma 2.9 (Continuity) Let I_i (i ≥ 0) be Θ-interpretations such that I_0 ⊆ I_1 ⊆ .... Then for every conjunct A and substitution θ
∪_{i≥0} I_i ⊨ ⟨A, θ⟩ iff I_k ⊨ ⟨A, θ⟩ for some k ≥ 0.
Proof. Straightforward by induction on the length of A and the Monotonicity Lemma 2.8. □
Note that the Continuity Lemma strengthens the Monotonicity Lemma.
2.3 Θ-semantics and LD-resolution
The next step is to show that LD-resolution is correct w.r.t. the Θ-semantics. The proof relies on the Good Tuple Lemma 2.5. The following assumption is convenient.
Assumption 2.10 Whenever in an LD-resolution step the selected atom A is unified with the head H of the input clause, where H is a pure atom, the mgu θ of A and H is such that Dom(θ) = Var(H). By this assumption we then have A = Hθ.
Theorem 2.11 (Soundness I) Let P be a homogeneous Prolog program and A a conjunct. If θ is a c.a.s. for P ∪ {← A}, then for any Θ-model I of P we have I ⊨ ⟨A, θ⟩.
Proof. Fix a Θ-model I of P. Let ξ be an LD-refutation of P ∪ {← A} with c.a.s. θ. We prove the claim by induction on the length l(ξ) of ξ. Three cases arise.
Case 1: l(A) = 0. Then A = true and θ = ε, so the claim follows directly by Definition 2.7.
Case 2: l(A) = 1. Then A = A for an atom A. If A is a built-in atom, then the claim follows directly by Definitions 2.1 and 2.7. If A is defined in P, then consider the resolvent ← Bσ of ← A in ξ, obtained using the input clause H ← B and mgu σ. H is a pure atom and, by standardization apart, A and H ← B have no variable in common, so by Assumption 2.10
Dom(σ) = Var(H), Ran(σ) ∩ Var(H ← B) = ∅,    (1)
and
A = Hσ.    (2)
Let θ' be the c.a.s. for P ∪ {← Bσ} computed by the suffix ξ' of ξ starting at ← Bσ. Then
θ = (σθ')|A.    (3)
We have l(ξ') = l(ξ) − 1, so by the induction hypothesis I ⊨ ⟨Bσ, θ'⟩. But I is a model of P, so H ← B is true in I and consequently, by (1) and Definition 2.7, I ⊨ ⟨Hσ, θ'|Hσ⟩. Thus by (2) I ⊨ ⟨A, θ'|A⟩. However, A and H have no variable in common, so by (1) σ|A = ε and consequently by (3) θ = (σθ')|A = θ'|A. So we have proved I ⊨ ⟨A, θ⟩.
Case 3: l(A) > 1. Then A = A, B for an atom A and a non-empty conjunct B. By the Good Tuple Lemma 2.5 there exist σ and η s.t. θ = (ση)|(A, B) and
(i) P ∪ {← A} has an LD-refutation ξ1 with c.a.s. σ,
(ii) P ∪ {← Bσ} has an LD-refutation ξ2 with c.a.s. η,
(iii) (A, B, σ, η) is a good tuple.
Moreover, by the proof of the same lemma it follows that we can choose ξ1, ξ2 to be subderivations of ξ. Then l(ξ1) < l(ξ), so by the induction hypothesis
I ⊨ ⟨A, σ⟩.    (4)
Also l(ξ2) < l(ξ), so by the induction hypothesis
I ⊨ ⟨Bσ, η⟩.    (5)
Thus by (iii), (4) and (5) we get I ⊨ ⟨A, θ⟩ by Definition 2.7. □
In order to prove the converse of Theorem 2.11 it is helpful to consider a special Θ-model representing all Θ-models, in the sense that a conjunction is true in it (w.r.t. a given post-substitution) iff it is true in all Θ-models. The Θ-interpretations are naturally ordered by set inclusion. In this ordering the least Θ-interpretation is ∅ and the greatest one is B_P. Analogously to standard Herbrand models, the Θ-models are closed w.r.t. arbitrary intersections, from which we deduce the existence of a least Θ-model.
Theorem 2.12 Let P be a homogeneous program and let M be a class of Θ-models of P. Then ⋂M = ⋂_{I ∈ M} I is a Θ-model of P.
Proof. Let H ← B be a variant of a clause of P and let σ, η be such that Dom(σ) = Var(H), Ran(σ) ∩ Var(H ← B) = ∅, Ran(η) ∩ Var(Hσ) ⊆ Var(Bσ) and ⋂M ⊨ ⟨Bσ, η⟩. Fix I ∈ M. By the Monotonicity Lemma 2.8 we have I ⊨ ⟨Bσ, η⟩, so, since I is a Θ-model, I ⊨ ⟨Hσ, η|Hσ⟩. By Definition 2.7 and the fact that I is an arbitrary element of M we conclude ⋂M ⊨ ⟨Hσ, η|Hσ⟩. □
Corollary 2.13 (Least Model) Every homogeneous program P has a least Θ-model, N_P. □
This Θ-model is the intended representative of all Θ-models of P in the following sense.
Corollary 2.14 Let A be a conjunct and θ a substitution. Then N_P ⊨ ⟨A, θ⟩ iff for all Θ-models I of P we have I ⊨ ⟨A, θ⟩.
Proof. By the Monotonicity Lemma 2.8. □
In the theory of logic programming the least Herbrand model can be generated as the least fixpoint of the immediate consequence operator T_P on the Herbrand interpretations. This characterization is useful for establishing the completeness of SLD-resolution with respect to the least Herbrand model. We now provide an analogous characterization of the least Θ-model N_P, in order to show the completeness of LD-resolution with respect to N_P. First, we introduce the appropriate operator T_P.
Definition 2.15 Let P be a homogeneous program. The immediate consequence operator T_P on the Θ-interpretations is defined as follows:
T_P(I) = {⟨Hσ, η|Hσ⟩ | for some B, H ← B is a variant of a clause from P, Dom(σ) = Var(H), Ran(σ) ∩ Var(H ← B) = ∅, Ran(η) ∩ Var(Hσ) ⊆ Var(Bσ), and I ⊨ ⟨Bσ, η⟩}. □
Next, we characterize the Θ-models of P as the pre-fixpoints of T_P. The following proposition gives this characterization for programs consisting of one clause only.
Proposition 2.16 Given a clause C and a Θ-interpretation I, I is a Θ-model of {C} iff T_{C}(I) ⊆ I.
Proof. For every H, σ and η we have ⟨Hσ, η|Hσ⟩ ∈ T_{C}(I) iff (by Definition 2.15) H ← B is a variant of C such that I ⊨ ⟨Bσ, η⟩, Dom(σ) = Var(H), Ran(σ) ∩ Var(H ← B) = ∅ and Ran(η) ∩ Var(Hσ) ⊆ Var(Bσ). By Definition 2.7, I is a Θ-model of {C} iff for every such choice I ⊨ ⟨Hσ, η|Hσ⟩, i.e. ⟨Hσ, η|Hσ⟩ ∈ I, which holds iff T_{C}(I) ⊆ I. □
To generalize Proposition 2.16 to non-singleton programs we use the following obvious lemma, which states the additivity of the operator T_P.
Lemma 2.17 Let P, P' be homogeneous programs. Then for every Θ-interpretation I we have T_{P ∪ P'}(I) = T_P(I) ∪ T_{P'}(I). □
Corollary 2.18 (Model Characterization) I is a Θ-model of P iff T_P(I) ⊆ I. □
Now we characterize N_P as the least fixpoint of T_P. We need the following observation.
Proposition 2.19 (Monotonicity) T_P is monotonic, that is, I ⊆ J implies T_P(I) ⊆ T_P(J).
Proof. By the Monotonicity Lemma 2.8. □
Proposition 2.20 (Least Fixpoint) T_P has a least fixpoint lfp(T_P), which is also its least pre-fixpoint.
Proof. By the Monotonicity Proposition 2.19 and the Knaster-Tarski Theorem. □
We can now derive the desired result.
Corollary 2.21 lfp(T_P) = N_P.
Proof. By the Least Fixpoint Proposition 2.20, the Least Model Corollary 2.13 and the Model Characterization Corollary 2.18. □
Finally, we provide a more precise characterization of the Θ-model N_P that will be used in the proof of the completeness of LD-resolution. We need the following strengthening of the Monotonicity Proposition 2.19.
Proposition 2.22 (Continuity) T_P is continuous, that is, for every sequence I_i (i ≥ 0) of Θ-interpretations such that I_0 ⊆ I_1 ⊆ ... we have T_P(∪_{i≥0} I_i) = ∪_{i≥0} T_P(I_i).
Proof. By the Continuity Lemma 2.9. □
We now define a sequence of Θ-interpretations by
T_P ↑ 0 = ∅,  T_P ↑ (n+1) = T_P(T_P ↑ n),  T_P ↑ ω = ∪_{i≥0} T_P ↑ i.
Proposition 2.23 (Characterization) N_P = T_P ↑ ω.
Proof. By the Continuity Proposition 2.22 and the Knaster-Tarski Theorem, lfp(T_P) = T_P ↑ ω, so the claim follows by Corollary 2.21. □
We can now prove the completeness of LD-resolution with respect to the Θ-semantics for homogeneous programs.
Theorem 2.24 (Completeness I) Consider a homogeneous program P and a conjunct A. Suppose that for all Θ-models I of P we have I ⊨ ⟨A, θ⟩. Then there exists an LD-refutation of P ∪ {← A} with c.a.s. θ.
Proof. In particular we have N_P ⊨ ⟨A, θ⟩. By the Characterization Proposition 2.23, T_P ↑ ω ⊨ ⟨A, θ⟩. By the monotonicity of T_P we have T_P ↑ 0 ⊆ T_P ↑ 1 ⊆ ..., so by the Continuity Lemma 2.9, T_P ↑ k ⊨ ⟨A, θ⟩ for some k > 0. We now prove the claim by induction w.r.t. the lexicographic ordering < defined on pairs ⟨k, l(A)⟩ of natural numbers. In this ordering ⟨n1, n2⟩ < ⟨m1, m2⟩ iff n1 < m1, or n1 = m1 and n2 < m2.
The case when A is empty, i.e. l(A) = 0 (which covers the base case of the induction), is immediate by Definition 2.7. Suppose now A = A, B. There exist substitutions σ, η such that T_P ↑ k ⊨ ⟨A, σ⟩, T_P ↑ k ⊨ ⟨Bσ, η⟩, (A, B, σ, η) is a good tuple and θ = (ση)|(A, B).
We first prove that P ∪ {← A} has an LD-refutation with c.a.s. σ. When A is a built-in atom this conclusion follows immediately from Definitions 2.1 and 2.7. When A is defined in P we have k > 0. By Definition 2.15 there exist a variant H ← B' of a clause from P, a substitution γ s.t. Dom(γ) = Var(H), Ran(γ) ∩ Var(H ← B') = ∅ and A = Hγ, and a substitution δ such that
Ran(δ) ∩ Var(Hγ) ⊆ Var(B'γ),    (6)
T_P ↑ (k−1) ⊨ ⟨B'γ, δ⟩ and σ = δ|A. Since ⟨k−1, l(B')⟩ < ⟨k, l(A)⟩, by the induction hypothesis there exists an LD-refutation of P ∪ {← B'γ} with c.a.s. δ. Now notice that Dom(γ) = Var(H), ← B'γ is a resolvent of ← A using the mgu γ, and (6) holds. Then by the C.a.s. Lemma 2.3, σ is a c.a.s. of P ∪ {← A}.
Since ⟨k, l(B)⟩ < ⟨k, l(A)⟩, by the induction hypothesis there also exists an LD-refutation of P ∪ {← Bσ} with c.a.s. η. Since (A, B, σ, η) is a good tuple and θ = (ση)|(A, B), we can apply the Good Tuple Lemma 2.5. We conclude that there exists an LD-refutation of P ∪ {← A, B} with c.a.s. θ. □
Corollary 2.25 Let P be a homogeneous Prolog program. Then
N_P = {⟨A, θ⟩ | A is defined in P and there exists an LD-refutation of P ∪ {← A} with c.a.s. θ}.
Proof. By Definition 2.7 and Theorems 2.11 and 2.24. □
This corollary shows that the Θ-model N_P captures precisely the computational meaning of the homogeneous program P.
2.4 Extension to arbitrary programs
Now, every program can easily be transformed into a homogeneous program.
Definition 2.26 (Homogeneous Form) Let P be a Prolog program. Let x1, x2, ... be distinct variables not occurring in P. Transform each clause p(t1, ..., tk) ← B of P into the clause
p(x1, ..., xk) ← x1 = t1, ..., xk = tk, B.
Here = is the built-in discussed in Section 2.2, interpreted as "is unifiable with". We denote the resulting program by Hom(P) and call it a homogeneous form of P. □
We now show that a Prolog program P and its homogeneous form Hom(P) have the same computational behaviour.
Theorem 2.27 (Equivalence I) Let P be a Prolog program and G a goal. Then P ∪ {G} has a refutation with c.a.s. θ if and only if Hom(P) ∪ {G} has a refutation with c.a.s. θ.
Proof. See Appendix 5.2. □
Theorem 2.27 allows us to reason about the meaning of Prolog programs by first transforming them to a homogeneous form. Alternatively, we can extend the definition of truth to arbitrary programs by simply defining a clause to be true iff its homogeneous version is true. By "processing" the meaning of the introduced calls to the built-in = we obtain the following direct definition of truth of a clause.
Definition 2.28 I ⊨ H ← B iff for every atom A and every variant H' ← B' of H ← B disjoint from A the following implication holds: if θ = mgu(A, H'), I ⊨ ⟨B'θ, η⟩ and Ran(η) ∩ (Var(A) ∪ Var((H' ← B')θ)) ⊆ Var(B'θ), then I ⊨ ⟨A, (θη)|A⟩. □
We now establish the semantic equivalence of a program and its homogeneous form.
Theorem 2.29 (Equivalence II) I is a Θ-model of a Prolog program P iff it is a Θ-model of Hom(P).
Proof. See Appendix 5.3. □
From the two previous results on the operational and semantic equivalence of P and Hom(P), the soundness and completeness of LD-resolution for Prolog programs follow directly.
Theorem 2.30 (Soundness II) Let P be a Prolog program and A a conjunct. If θ is a c.a.s. for P ∪ {← A}, then for any Θ-model I of P we have I ⊨ ⟨A, θ⟩.
Proof. By the Equivalence I Theorem 2.27 and the Equivalence II Theorem 2.29. □
Theorem 2.31 (Completeness II) Consider a Prolog program P and a conjunct A. Suppose that for all Θ-models I of P we have I ⊨ ⟨A, θ⟩. Then there exists an LD-refutation of P ∪ {← A} with c.a.s. θ.
Proof. By the Equivalence II Theorem 2.29 and the Equivalence I Theorem 2.27. □
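As an illustration of Definition 2.26, here is a clause-by-clause homogeneous form of a small append-like program; the names app and app_h are ours (in Hom(P) the relation name would in fact stay the same), and ':-' stands for '←':

% Original program.
app([], Ys, Ys).
app([X|Xs], Ys, [X|Zs]) :- app(Xs, Ys, Zs).

% Its homogeneous form: every head becomes an elementary atom and the
% original argument terms move into explicit unification atoms in front
% of the body.
app_h(A1, A2, A3) :- A1 = [], A2 = Ys, A3 = Ys.
app_h(A1, A2, A3) :- A1 = [X|Xs], A2 = Ys, A3 = [X|Zs], app_h(Xs, Ys, Zs).

% ?- app([1,2], [3], Zs).    % Zs = [1,2,3]
% ?- app_h([1,2], [3], Zs).  % Zs = [1,2,3], the same computed answer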
2.5 Relation between the Θ-semantics and the S-semantics
In this section we show that the Θ-semantics is the natural extension to Prolog programs of the S-semantics defined in Falaschi et al. [FLMP89] for logic programs, in the sense that if P is a pure Prolog program (i.e. it does not contain built-in atoms) then the least Θ-model N_P coincides with the least S-model S_P. To this purpose it will be helpful to consider the following operational characterization of S_P (cf. Falaschi et al. [FLMP89]):
S_P = {p(x)θ | x ⊆ Var and ← p(x) has an LD-refutation with c.a.s. θ}
or, equivalently,
S_P = {⟨p(x), θ⟩ | x ⊆ Var and ← p(x) has an LD-refutation with c.a.s. θ}.    (7)
We now define some properties of Θ-interpretations which will be shown to hold for N_P when P is pure, and which will be useful for proving the correspondence stated above.
Definition 2.32 Let I be a Θ-interpretation. I is called
upward-closed iff for all ⟨A, θ⟩ ∈ I and all σ such that there exists γ = mgu(Aθ, Aσ), we have ⟨Aσ, θ'⟩ ∈ I, where θ' is the restriction of γ to Aσ;
downward-closed iff for all ⟨Aσ, η⟩ ∈ I there exist θ and γ = mgu(Aθ, Aσ) such that ⟨A, θ⟩ ∈ I and η is the restriction of γ to Aσ. □
Proposition 2.33 Let P be a pure Prolog program. Then N_P is upward-closed and downward-closed.
Proof. By using the characterization of N_P given by Corollary 2.25, it is sufficient to prove the operational counterparts of upward and downward closedness, which, extended to arbitrary conjunctions, are expressed by the following lemma.
Lemma 2.34
1. If the goal ← Q has an LD-refutation with c.a.s. θ then, for each σ such that there exists γ = mgu(Qθ, Qσ), the goal ← Qσ has an LD-refutation with a computed answer substitution θ' which is the restriction of γ to Qσ.
2. If the goal ← Qσ has an LD-refutation with c.a.s. η, then there exist θ and γ = mgu(Qθ, Qσ) such that ← Q has an LD-refutation with c.a.s. θ and η is the restriction of γ to Qσ.
Proof. This lemma can be proved by using the correctness and completeness theorems for the S-semantics (cf. Falaschi et al. [FLMP89]), but we prefer to give here an independent, purely operational proof. We prove (1) by induction on the length n of the refutation.
n = 1) Immediate, since in this case Q = true and θ = ε.
n > 1) Consider an LD-refutation ξ for ← Q with c.a.s. θ. Let σ be a substitution such that there exists γ = mgu(Qθ, Qσ). Assume Q = A1, ..., An. Let R be the direct descendant of Q in ξ, and let A ← B be the variant of a clause in P used to derive R from Q. Assume, without loss of generality, that A ← B has no variables in common with Q nor with Qσ. Let μ = mgu(A1, A). Then R = (B, A2, ..., An)μ, and ← R has an LD-refutation ξ', a suffix of ξ, with a c.a.s. δ such that
Qμδ = Qθ.    (8)
Since γ unifies Qθ and Qσ, by (8) it follows that
Qμδγ = Qθγ = Qσγ,    (9)
by which we get
A1μδγ = A1σγ.    (10)
Moreover, since μ unifies A1 and A, we have
Aμδγ = A1μδγ.    (11)
Since A and A1σ have no variables in common, from (10) and (11) it follows that they are unifiable. Consider ν = mgu(A, A1σ), and let
R' = (B, (A2, ..., An)σ)ν
be the corresponding direct descendant of ← Qσ obtained using the clause A ← B. Since ν is a most general unifier of A and A1σ, and the variables of Q, Qσ and A ← B are pairwise disjoint, the substitution that behaves like μδγ on the variables of A ← B and like γ on the variables of Qσ factors through ν, i.e. there exists a substitution ρ such that
Qσνρ = Qσγ    (12)
and
Bνρ = Bμδγ.    (13)
Moreover, σν unifies A1 and A, and μ is an mgu of A1 and A, so there exists a substitution τ such that μτ coincides with σν on the variables of Q and of A ← B; hence
R' = Rτ.    (14)
By (9), (12) and (13) we have Rδγ = (Bμδγ, (A2, ..., An)σγ) = R'ρ, so Rδ and R' = Rτ are unifiable. By the induction hypothesis applied to ← R (which has an LD-refutation of length n−1 with c.a.s. δ) and to the substitution τ, the goal ← R' has an LD-refutation whose c.a.s. λ is the restriction of mgu(Rδ, R') to R'. Consequently, the goal ← Qσ has a refutation with c.a.s. (νλ)|Qσ. Finally, by (9), (12), (13) and (14), the substitutions νλ and γ coincide (up to renaming) on the variables of Qσ, i.e. the obtained c.a.s. is the restriction of γ to Qσ, as required.
The proof of (2) can be obtained by reversing the reasoning. □
Note the analogy between Lemma 2.34(2) and the Lifting Lemma. Actually, Lemma 2.34(2) (which can obviously be generalized to arbitrary selection rules) is stronger than the Lifting Lemma, because not only does it ensure the existence of θ, but it also gives more precise information about the relation between θ, σ and η (from the Lifting Lemma we would only know that Qθ is more general than Qση). If P contains built-in relations, then N_P may fail to be upward-closed or downward-closed.
Example 2.35 Consider the program
P: p(X) ← var(X), q(X).
   q(a) ←.
The goal ← p(x) has an LD-refutation with c.a.s. θ = {x/a}, but the goal ← p(a) has no refutation. Thus N_P is not upward-closed.
Consider the program
P: p(X) ← nonvar(X).
The goal ← p(a) has an LD-refutation, but the goal ← p(x) has no refutation. Thus N_P is not downward-closed. □
We now show that if P is a pure Prolog program, then N_P is isomorphic to the least S-model S_P, in the sense that there exist an abstraction operator α from S-interpretations to Θ-interpretations and an abstraction operator β from Θ-interpretations to S-interpretations such that for every program P
N_P = α(S_P) and S_P = β(N_P).
Note: α, β are abstraction operators in the sense that they do not depend upon P (i.e. if S_P1 = S_P2 then α(S_P1) = α(S_P2), and if N_P1 = N_P2 then β(N_P1) = β(N_P2)).
Definition 2.36 The mappings Up, from S-interpretations to Θ-interpretations, and Kernel, from Θ-interpretations to S-interpretations, are defined as follows:
Up(I) = {⟨Aσ, θ'⟩ | there exist ⟨A, θ⟩ ∈ I and γ = mgu(Aθ, Aσ) such that θ' is the restriction of γ to Aσ};
Kernel(I) = {⟨p(x), θ⟩ | x ⊆ Var and ⟨p(x), θ⟩ ∈ I}. □
Note that the definitions of Up and Kernel do not depend upon P. We prove that Up and Kernel are the intended α and β satisfying the property described above.
Proposition 2.37 If P is a pure Prolog program, then
1. N_P = Up(S_P), and
2. S_P = Kernel(N_P).
Proof. The equality S_P = Kernel(N_P) follows immediately from the definition of Kernel and from (7). Therefore we only have to prove that N_P = Up(Kernel(N_P)).
⊆) Let ⟨A, θ⟩ ∈ N_P. Assume A = p(x)σ. Then by Proposition 2.33(2) there exist η and γ = mgu(p(x)η, p(x)σ) such that ⟨p(x), η⟩ ∈ N_P (and therefore ⟨p(x), η⟩ ∈ Kernel(N_P)) and θ is the restriction of γ to A. The rest follows by Proposition 2.33(1).
⊇) Let ⟨A, θ⟩ ∈ Up(Kernel(N_P)). Then there exists ⟨p(x), η⟩ ∈ Kernel(N_P) ⊆ N_P such that, for some σ, A = p(x)σ and there exists γ = mgu(p(x)η, p(x)σ) such that θ is the restriction of γ to A. By Proposition 2.33(1), we conclude ⟨A, θ⟩ ∈ N_P. □
2.6 Full abstraction of the Θ-semantics
In the previous sections we have seen that N_P coincides with the set of pairs ⟨A, θ⟩ such that there exists an LD-refutation of P ∪ {← A} with c.a.s. θ, and that the Θ-semantics is and-compositional, in the sense that the truth value of a conjunction of atoms (possibly sharing variables) can be derived from the truth values of the atoms. We argue that a declarative semantics should provide such a compositional interpretation of conjuncts. We focus on conjuncts of the form
p1(x1), ..., pn(xn)
where the pi(xi)'s are either elementary atoms or atoms of the form x = t, and x1, ..., xn are possibly not disjoint. Every conjunct can be equivalently transformed into a conjunct of this form.
One might wonder whether it is possible to develop a declarative semantics for Prolog based on a simpler (i.e. more abstract) domain than the Θ-domain, possibly encoding less information concerning the computational behaviour of goals. One might for instance be interested in observing only the non-ground success set of a program P, defined as
NGSS_P = {Aθ | ← A has an LD-refutation with c.a.s. θ}
(which corresponds to the least C-model when P is a pure program (cf. Falaschi et al. [FLMP89])). This notion can be considered the most abstract interesting one, since, as we have already seen in the introduction, the ground success set is not suitable for programs containing built-in relations. So the question is: is it possible to give a declarative, hence and-compositional, characterization of NGSS_P?
If we want a declarative model which coincides with NGSS_P, then the answer is no. In fact, it is easy to show that NGSS_P is not and-compositional (in the sense that the NGSS_P information about a goal in P cannot be derived from the NGSS_P information about its atomic subgoals). An example of this fact will be given below. We therefore have to be content with a declarative semantics from which it is possible to derive NGSS_P, but which contains more information than NGSS_P, as necessary to achieve and-compositionality. The main result of this section is that the information encoded in N_P is the least that is necessary to model NGSS_P and to provide an and-compositional notion of truth. In other words, N_P is fully abstract with respect to and-compositionality and NGSS_P, which means that N_P is the simplest declarative semantics for Prolog programs with first-order built-in's.
We first introduce the semantical mappings associated with N_P and NGSS_P (which we will still denote by N_P and NGSS_P).
Definition 2.38 Let x1, ..., xn be sequences of variables, possibly not disjoint, and let p1(x1), ..., pn(xn) be either elementary atoms or atoms of the form x = t. The mapping N_P from conjunctions of such atoms to sets of pairs of substitutions is defined as follows:
N_P[[p1(x1), ..., pn(xn)]] = {⟨σ, θ⟩ | Dom(σ) ⊆ {x1} ∪ ... ∪ {xn} and ← (p1(x1), ..., pn(xn))σ has an LD-refutation with c.a.s. θ}.
The mapping NGSS_P from conjunctions of such atoms to sets of substitutions is defined as follows:
NGSS_P[[p1(x1), ..., pn(xn)]] = {σθ | Dom(σ) ⊆ {x1} ∪ ... ∪ {xn} and ← (p1(x1), ..., pn(xn))σ has an LD-refutation with c.a.s. θ}. □
The correspondence with the standard notions of N_P and NGSS_P is immediate, since
N_P[[p(x)]] = {⟨σ, θ⟩ | ⟨p(x)σ, θ⟩ ∈ N_P} and NGSS_P[[p(x)]] = {σθ | p(x)σθ ∈ NGSS_P}.
The semantics NGSS_P is more abstract than N_P, i.e. the information encoded in NGSS_P can be retrieved from that in N_P (correctness of N_P w.r.t. NGSS_P). This is shown by the following fact.
Fact 1 NGSS_P[[Q]] = {σθ | ⟨σ, θ⟩ ∈ N_P[[Q]]}.
On the other hand, it is not possible to retrieve the information encoded in N_P from that encoded in NGSS_P, i.e. N_P and NGSS_P are not equivalent. This is because the mapping N_P is and-compositional and NGSS_P is not. In fact, N_P[[Q, R]] can be derived from N_P[[Q]] and N_P[[R]]:
⟨σ, θ⟩ ∈ N_P[[Q, R]] iff there exist η and τ such that ⟨σ, η⟩ ∈ N_P[[Q]], ⟨ση, τ⟩ ∈ N_P[[R]], θ = (ητ)|(Q, R)σ and (Qσ, Rσ, η, τ) is a good tuple.
On the contrary, NGSS_P is not and-compositional, as shown in the following example.
Example 2.39 Consider the program
P: p(X) ← X = a.
   q(X) ← var(X), X = a.
We have NGSS_P[[p(x)]] = NGSS_P[[q(x)]] = {{x/a}}, but NGSS_P[[p(x), p(x)]] = {{x/a}} whereas NGSS_P[[q(x), q(x)]] = ∅. □
Note that the key point of this counterexample is the presence of shared variables. The next theorem shows, however, that N_P is the most abstract and-compositional semantics which is correct w.r.t. NGSS_P.
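Before turning to it, the counterexample of Example 2.39 can be checked operationally; a minimal sketch in executable syntax (':-' for '←'), with the expected outcomes as comments:

p(X) :- X = a.
q(X) :- var(X), X = a.

% ?- p(X).        % X = a
% ?- q(X).        % X = a            (the same non-ground success set)
% ?- p(X), p(X).  % X = a
% ?- q(X), q(X).  % fails: the first call binds X to a, so var(X) in the
%                 % second call fails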
Theorem 2.40 (Full Abstraction) If N_P[[Q]] ≠ N_P[[R]] then there exists a conjunction A such that NGSS_P[[A, Q]] ≠ NGSS_P[[A, R]].
Proof. Assume, without loss of generality, that there exists ⟨σ, θ⟩ ∈ N_P[[Q]] \ N_P[[R]]. Let
σ = {x1/t1, ..., xm/tm},  Var(Qσ) = {y1, ..., yn}.
Define now:
A1 = x1 = t1, ..., xm = tm,
A2 = var(y1), ..., var(yn),
A3 = (y_k1 \== y_l1), ..., (y_kr \== y_lr),
where {{k1, l1}, ..., {kr, lr}} are all possible combinations of two indexes in the set {1, ..., n} (r is the cardinality of the set of such combinations: r = (n−1)n/2). Finally, define
A = A1, A2, A3.
By the definition of A we derive immediately that σθ ∈ NGSS_P[[A, Q]]. We now show that σθ ∉ NGSS_P[[A, R]]. Assume, by contradiction, that there exists β ∈ NGSS_P[[A, R]] such that β = σθ. Then there exist α and η such that
← (A, R)α has an LD-refutation with c.a.s. η    (21)
and
αη = σθ.    (22)
We show that in this case ← Rσ has an LD-refutation with c.a.s. θ, i.e. ⟨σ, θ⟩ ∈ N_P[[R]], against the hypothesis. Consider an LD-refutation for ← (A, R)α with a c.a.s. η which satisfies (22). Then there exist γ and δ such that
← A1α has an LD-refutation with c.a.s. γ,    (23)
← A2αγ has an LD-refutation (with c.a.s. ε),    (24)
← A3αγ has an LD-refutation (with c.a.s. ε),    (25)
← Rαγ has an LD-refutation with c.a.s. δ,    (26)
and
η = (γδ)|(A, R)α.    (27)
Since γ is an mgu for {x1 = t1, ..., xm = tm}α, by (23) the substitution αγ agrees with σ on the bound variables x1, ..., xm. Moreover, by (24), (25) and the definitions of [[var]] and [[\==]], the variables y1, ..., yn remain distinct variables under αγ; therefore, since the domains of σ and αγ are restricted to the variables of A1 and Q, we can derive (up to renaming)
αγ = σ.    (28)
By (26), ← Rσ then has an LD-refutation with c.a.s. δ; furthermore, by (28), (27), (22) and (21),
Rσδ = Rαγδ = Rαη = Rσθ,
i.e. (since both the domains of δ and θ are restricted to the variables of the goal), δ = θ. □
3 Termination of Prolog Programs
In this section we show that the Θ-semantics is helpful when studying termination of Prolog programs. The presence of built-in's allows us to better control the execution of programs, and consequently it is not surprising that most "natural" programs with built-in's terminate for all goals. This motivates the following definition.
Definition 3.1 We say that a Prolog program P strongly terminates if for all goals G all LD-derivations of P ∪ {G} are finite. □
Traditionally, the main concept used to prove termination of Prolog programs is that of a level mapping. A level mapping was originally defined to be a function from ground atoms to natural numbers (see Bezem [Bez89], Cavedon [Cav89], Apt and Pedreschi [AP90]). In our case it is more natural to consider level mappings defined on non-ground atoms. Such level mappings were already considered in Bossi, Cocco and Fabris [BCF91] and subsequently in Plümer [Plu91], but they were applied only to prove termination of pure Prolog programs. In our case it is convenient to allow a level mapping yielding values in a well-founded ordering.
Definition 3.2 A level mapping | | is a function from atoms to a well-founded ordering such that |A| = |B| if A and B are variants. □
The following auxiliary notion will be used below.
Definition 3.3 C' is called a head instance of a clause C if C' = Cθ for some substitution θ that instantiates only variables of C that appear in its head. □
First we provide a method for proving (strong) termination of Prolog programs in homogeneous form. Our key concept is the following one.
Definition 3.4 A homogeneous Prolog program P is called acceptable w.r.t. a level mapping | | and a Θ-model I of P if for all head instances A ← B1, ..., Bn of a clause of P the following implication holds for i ∈ [1, n]:
if I ⊨ ⟨(B1, ..., B_{i−1}), η⟩ then |A| > |B_iη|.
P is called acceptable if it is acceptable w.r.t. some level mapping and a Θ-model of P. □
The relevance of the notion of acceptability is clarified by the following theorem.
Theorem 3.5 (Soundness III) Let P be a homogeneous Prolog program. If P is acceptable then it strongly terminates.
The following notion will be useful in the proof.
Definition 3.6 Consider an LD-derivation ξ. Let G be a goal in ξ. Let k be the minimum length of a goal in the suffix of ξ starting at G, and let H be the first goal in this suffix with length k. We call H the shortest goal of ξ under G. □
Proof of Theorem 3.5. Suppose P is acceptable w.r.t. a level mapping | | and a Θ-model I, and suppose by contradiction that there exists an infinite LD-derivation of P ∪ {G}. Call it ξ. Denote G by H_0. We first define two infinite sequences G_1, G_2, ... and H_1, H_2, ... of goals of ξ by putting, for j ≥ 1:
G_j is the shortest goal of ξ under H_{j−1}, and H_j is the direct descendant of G_j in ξ.
Fix j ≥ 1. Let A ← B_1, ..., B_n be the input clause and θ the mgu used to obtain H_j from G_j. By the choice of G_j and H_j we have l(G_j) ≤ l(H_j), so n ≥ 1. G_j is of the form ← C_1, ..., C_k where k ≥ 1, and H_j is of the form ← (B_1, ..., B_n, C_2, ..., C_k)θ. By definition, no goal of ξ under G_j is of length less than k, so G_{j+1} is of the form ← (B_i, ..., B_n, C_2, ..., C_k)θη for some η, where i ∈ [1, n]. This means that there exists an LD-refutation of P ∪ {← (B_1, ..., B_{i−1})θ} with c.a.s. η. This refutation is obtained by deleting, from all goals of ξ between and including H_j and G_{j+1}, all occurrences of the instantiated versions of B_i, ..., B_n, C_2, ..., C_k. By the Soundness Theorem 2.11 we have
I ⊨ ⟨(B_1, ..., B_{i−1})θ, η⟩.
By the acceptability of P,
|Aθ| > |B_iθη|.    (29)
By Assumption 2.10 the mgu γ used to obtain H_{j+1} from G_{j+1} does not bind the variables of the selected atom B_iθη. So B_iθηγ = B_iθη and consequently
|B_iθηγ| = |B_iθη|.    (30)
Thus, assuming j > 1, the instance of (30) obtained at the previous step gives
|C_1θ| = |C_1|    (31)
(C_1 is the first atom of G_j and B_iθη is the first atom of G_{j+1}). But θ unifies A and C_1, so
|C_1θ| = |Aθ|.    (32)
By (29), (31) and (32) we conclude, assuming j > 1,
|C_1| > |B_iθη|.
Thus, applying the level mapping | | to the first atoms of the goals G_2, G_3, ..., we obtain an infinite descending sequence of elements of a well-founded ordering. This yields a contradiction. □
We now prove a converse of the Soundness III Theorem 3.5. For a Prolog program P that strongly terminates and a goal G, denote by nodes_P(G) the number of nodes in the LD-tree of P ∪ {G}. The following lemma summarizes the relevant properties of nodes_P(G).
Lemma 3.7 (LD-tree) Let P be a Prolog program that strongly terminates. Then
(i) nodes_P(G) = nodes_P(H) if G and H are variants,
(ii) nodes_P(H) < nodes_P(G) for all non-root nodes H in the LD-tree of P ∪ {G},
(iii) nodes_P(← H) ≤ nodes_P(← G) for all prefixes H of a conjunct G.
Proof. (i) By a simple generalization of the Variant Lemma 2.8 of Apt [Apt90] to the class of Prolog programs, an isomorphism between the LD-trees of P ∪ {G} and P ∪ {H} can be established. (ii), (iii) Immediate by the definition. □
We are now in a position to prove the desired result.
Theorem 3.8 (Completeness III) Let P be a homogeneous Prolog program. Suppose that P strongly terminates. Then P is acceptable.
Proof. Put, for an atom A,
|A| = nodes_P(← A).
By Lemma 3.7(i), | | is a level mapping. We now prove that P is acceptable w.r.t. | | and N_P, the least Θ-model of P. To this end consider a clause C with head A_0 and its head instance Cθ = A ← B_1, ..., B_n, where Dom(θ) ⊆ Var(A_0). Let us assume that Cθ is disjoint with C. Then A is disjoint with A_0, A = A_0θ and Dom(θ) ⊆ Var(A_0), so θ is idempotent and Aθ = A. Thus θ unifies A and A_0, and it is easy to see that in fact θ is an mgu of A and A_0. Thus ← B_1, ..., B_n is a resolvent of ← A with the input clause C. By Lemma 3.7(ii)
nodes_P(← A) > nodes_P(← B_1, ..., B_n).    (33)
This conclusion was reached under the assumption that Cθ is disjoint with C, but Lemma 3.7(i) allows us to dispense with this assumption.
Suppose now that N_P ⊨ ⟨(B_1, ..., B_{i−1}), η⟩ for some i ∈ [1, n] and substitution η. Then by the Completeness Theorem 2.24 there exists an LD-refutation of ← B_1, ..., B_{i−1} with c.a.s. η, so ← (B_i, ..., B_n)η is a node in the LD-tree of P ∪ {← B_1, ..., B_n}. By Lemma 3.7(ii)
nodes_P(← B_1, ..., B_n) ≥ nodes_P(← (B_i, ..., B_n)η)    (34)
and by Lemma 3.7(iii)
nodes_P(← (B_i, ..., B_n)η) ≥ nodes_P(← B_iη).    (35)
By (33), (34) and (35) we now conclude nodes_P(← A) > nodes_P(← B_iη), i.e. |A| > |B_iη|. This shows that P is acceptable. □
Thus we have proved an equivalence between the notions of acceptability and strong termination for homogeneous Prolog programs. Now, every Prolog program can easily be transformed into a homogeneous program with the same termination behaviour.
Theorem 3.9 Let P be a Prolog program and G a goal. Then the LD-tree of P ∪ {G} is finite iff the LD-tree of Hom(P) ∪ {G} is finite.
The following lemma is useful.
Lemma 3.10 Let G be a goal and C a clause. G and C have an LD-resolvent ← Q with mgu θ iff G and Hom(C) have a resolvent ← (x1 = t1, ..., xn = tn)γ, Q with mgu γ, and θ is the c.a.s. of ← (x1 = t1, ..., xn = tn)γ, where t1, ..., tn (resp. x1, ..., xn) are the arguments of the head of C (resp. Hom(C)).
Proof. Let G = ← p(s1, ..., sn), A and C = p(t1, ..., tn) ← B, so that Hom(C) = p(x1, ..., xn) ← x1 = t1, ..., xn = tn, B. Then by Assumption 2.10, ← (s1 = t1, ..., sn = tn, B, A) is the resolvent of G and Hom(C), and ← (B, A)θ is the resolvent of G and C with θ = mgu(p(s1, ..., sn), p(t1, ..., tn)). By Lemma 5.1, mgu((s1, ..., sn), (t1, ..., tn)) is the c.a.s. of ← (s1 = t1, ..., sn = tn). □
Proof of Theorem 3.9. The LD-trees (in P and in Hom(P)) are finitely branching, so by König's Lemma it suffices to show that G has an infinite derivation in P iff G has an infinite derivation in Hom(P). The result follows by Lemma 3.10. □
Corollary 3.11 Let P be a Prolog program. Then P strongly terminates iff Hom(P) strongly terminates. □
This allows us to reason about termination of Prolog programs by first transforming them to a homogeneous form and then using the notion of acceptability. We now offer an alternative, direct way of reasoning about termination. To this end the following auxiliary notion will be needed.
Definition 3.12 Let P be a Prolog program and | | a level mapping. An atom A is called stable w.r.t. | | if |Aθ| ≤ |A| for every mgu θ of A and a variant, disjoint with A, of a head of a non-unit clause of P. □
Note that atoms with built-in relations are automatically stable w.r.t. every level mapping. The following is a generalization of Definition 3.4 to arbitrary Prolog programs.
Definition 3.13 A Prolog program P is called acceptable w.r.t. a level mapping | | and a Θ-model I of P if for all head instances A ← B1, ..., Bn of a clause of P the following implication holds for i ∈ [1, n]:
if I ⊨ ⟨(B1, ..., B_{i−1}), η⟩ then
(i) |A| > |B_iη|,
(ii) B_iη is stable w.r.t. | |.
P is called acceptable if it is acceptable w.r.t. some level mapping and a Θ-model of P. □
It is important to note the following.
Lemma 3.14 Let P be a homogeneous Prolog program and | | a level mapping. Then every atom is stable w.r.t. | |.
Proof. Suppose an atom A unifies with a variant B, disjoint with A, of a head of a non-unit clause of P. B is an elementary atom, so A is an instance of B, say A = Bσ with σ such that Dom(σ) = Var(B). Then Aσ = A, so σ unifies A and B. Let now θ be an mgu of A and B. Then Aθ is more general than Aσ, i.e. Aθ is more general than A. Also A is more general than Aθ, so A and Aθ are variants and consequently |Aθ| = |A|. □
Corollary 3.15 For homogeneous programs both definitions of acceptability coincide. □
The following theorem is a generalization of the Soundness III Theorem 3.5.
Theorem 3.16 (Soundness IV) Let P be a Prolog program. Suppose P is acceptable. Then P strongly terminates.
Proof. The proof is completely analogous to that of the Soundness III Theorem 3.5. The only difference is that instead of (30) we can now only claim, by condition (ii) of acceptability,
|B_iθηγ| ≤ |B_iθη|,
so, assuming j > 1, we now only have
|C_1θ| ≤ |C_1|
instead of (31). However, this weaker conclusion is still sufficient to yield the same contradiction as in the proof of Theorem 3.5. □
Ideally, we would like to prove the converse of the Soundness IV Theorem 3.16, that is, that Prolog programs that strongly terminate are acceptable. Unfortunately this is not the case.
Theorem 3.17 There exists a Prolog program P that strongly terminates but is not acceptable.
Proof. Consider the following program P:
p(f(X)) ← nonvar(X), p(X).
p(f(f(X))) ← nonvar(X), p(X).
It is easy to see that all LD-derivations of P terminate. In fact, in every LD-derivation of P a goal of the form ← p(y) leads to a failure in two steps, and a goal of the form ← p(f^n(y)), where n ≥ 1, leads to a goal of the form ← p(f^k(y)), where k < n, in two steps.
Suppose now that P is acceptable w.r.t. some level mapping | | and a Θ-model I. Then, due to condition (i),
|p(f(f(Y)))| > |p(f(Y))|
because nonvar(f(Y)) holds. Also p(f(Y)) is stable w.r.t. | |, so
|p(f(Y))| ≥ |p(f(Y))θ| = |p(f(f(X)))|,
where θ is an mgu of p(f(Y)) and the head p(f(f(X))), which gives a contradiction. □
It may seem disappointing that we have opted here for a notion of acceptability that does not allow us to prove its equivalence with strong termination for all Prolog programs. Clearly, it is possible to characterize strong termination by means of well-founded relations for all Prolog programs.
To this end it suces to use the concept of a level mapping de ned on goals, with the condition that jH j < jGj whenever H is a direct descendant of G in an LD-derivation. However, such a characterization of strong termination is hardly of any use when proving termination because it requires an analysis of arbitrary goals. In contrast, the de nition of acceptability refers only to the program clauses and calls for the use of a level mapping de ned only on atoms, so it is simpler to use. On the other hand, the introduction of homogeneous programs allows us to draw the following conclusion.
Theorem 3.18 Let P be a Prolog program. Then P strongly terminates iff Hom(P) is acceptable.
Proof. By the Soundness III Theorem 3.5 and Completeness III Theorem 3.8 applied to Hom(P), and Corollary 3.11. □
4 Applications

We illustrate the use of the results established in the previous section to prove strong termination of some Prolog programs. We start by considering the program list given in Section 1. Then we show how a relation that strongly terminates can be treated as a built-in relation when proving strong termination of a program depending on this relation. This allows us to prove strong termination in a modular way. We illustrate this method by proving strong termination of two well-known Prolog programs.
First, we define by structural induction the function | | on terms by putting:
  |x| = 0 if x is a variable,
  |f(x₁, ..., xₙ)| = 0 if f ≠ [ · | · ],
  |[x|xs]| = |xs| + 1.
It is useful to note that for a list xs, |xs| equals its length. This function will be used in the examples below.
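For instance, for the (hypothetical) terms below we obtain, directly from this definition,
  |[a, b, c]| = |[b, c]| + 1 = |[c]| + 2 = |[]| + 3 = 3,
  |[a, b | ys]| = |ys| + 2 = 2 (ys a variable),
  |f(a, b)| = 0, |ys| = 0,
so on proper lists | | computes the length, while on partial lists it counts only the explicit list constructors.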
List

Consider the program list from Section 1:

(l1)  list([]).
(l2)  list([X | Xs]) ← nonvar(Xs), list(Xs).
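Before turning to the termination proof, it may help to see how list behaves on a few sample goals (a sketch; the outcomes follow directly from the two clauses and the semantics of nonvar given in Section 2): the goal ← list([a,b]) succeeds using (l2) twice and then (l1); the goal ← list([a|xs]), with xs a variable, fails finitely, since nonvar(xs) fails; and the goal ← list(xs), with xs a variable, has exactly one successful LD-derivation, via (l1), with c.a.s. {xs/[]}, since the resolvent obtained via (l2) fails on nonvar(Xs).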
To prove that list strongly terminates we show that it is acceptable. We define a level mapping | | by putting
  |list(xs)| = |xs|,
  |nonvar(xs)| = 0.
Clearly, |A| = |B| if A and B are variants, so | | is indeed a level mapping. Next, we take the θ-base Θ_P as the θ-model of list.
Theorem 4.1 list is acceptable w.r.t. | | and Θ_P.
Proof. Consider a head instance C = A ← B₁, B₂ of (l2). It is of the form
  list([x|xs]) ← nonvar(xs), list(xs).
Claim 1 |A| > |B₁|.
Proof. Note that |list([x|xs])| > 0 = |nonvar(xs)|. □
Suppose now Θ_P ⊨ ⟨B₁, θ⟩. Then θ = ε and B₁ = nonvar(xs) with xs ∉ Var.
Claim 2 |A| > |B₂θ|.
Proof. Note that |A| = |list([x|xs])| = |[x|xs]| > |xs| = |list(xs)| = |B₂θ|. □
Claim 3 B₂θ is stable w.r.t. | |.
Proof. Suppose B₂θ unifies with a variant list([x′|xs′]) of the head of the clause (l2). Since xs ∉ Var, B₂θ is an instance of list([x′|xs′]). As in the proof of Lemma 3.14 this implies that for any mgu η of B₂θ and list([x′|xs′]) we have |B₂θη| = |B₂θ|. □ □
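To see the level mapping at work on a concrete (hypothetical) goal, note that the LD-derivation of ← list([a,b]) passes, after the successive nonvar tests, through the goals ← list([b]) and ← list([]), and the levels of the selected atoms strictly decrease: |list([a,b])| = 2 > |list([b])| = 1 > |list([])| = 0. This is exactly the descent that the acceptability conditions guarantee in general.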
Modularity
In the proof of Theorem 3.5 the level mapping of built-in relations is not used. This is due to the fact that the built-in relations always terminate and never occur in the head of a clause. So we can assume that |A| = 0 if A is a built-in atom. This observation provides an idea of how to prove the strong termination of a Prolog program in a modular way. Before formalizing this idea we show how the relation list previously defined can be treated as a built-in in the proof of the strong termination of a Prolog program.
Example 4.2 Consider the following program APPEND:

(a1)  a([], Ys, Ys) ← list(Ys).
(a2)  a([X | Xs], Ys, [X | Zs]) ← nonvar(Xs), a(Xs, Ys, Zs).
augmented by the clauses (l1) and (l2) defining the list program. To prove that APPEND strongly terminates we consider the program append formed by clauses (a1) and (a2). In append the relation list does not occur in the head of any clause. We already proved that list strongly terminates. Thus the relation list can be treated as a built-in with the semantics given by an arbitrary θ-model I₀ of the program list and the level mapping always equal to 0. Now to show that append strongly terminates we prove that it is acceptable. We choose the following level mapping:
  |a(x, y, z)| = |x|,
  |A| = 0 if A is a built-in or list(xs).
Next, we define a θ-interpretation for the relation a by putting
  I = {⟨a(xs, ys, zs), σ⟩ | |xsσ| + |ysσ| = |zsσ|}.
Lemma 4.3 I ∪ I₀ is a θ-model of append.
Proof. Let A = a(r, s, t) and let
  a([], Ys′, Ys′) ← list(Ys′)
be a variant of (a1) disjoint with A. Suppose
  θ = mgu(A, a([], Ys′, Ys′))
exists and assume
  I ∪ I₀ ⊨ ⟨list(Ys′)θ, γ⟩,
with γ satisfying the restriction of Definition 2.28. We have to show that
  I ∪ I₀ ⊨ ⟨A, (θγ)|A⟩.
We have that
  rθ = [], sθ = tθ = Ys′θ.
Then
  |rθ| + |sθ| = |tθ|
and so
  |rθγ| + |sθγ| = |tθγ|.
Hence I ∪ I₀ ⊨ ⟨A, (θγ)|A⟩ holds.
Let now
  a([X′|Xs′], Ys′, [X′|Zs′]) ← nonvar(Xs′), a(Xs′, Ys′, Zs′)
be a variant of (a2) disjoint with A. Suppose
  θ = mgu(A, a([X′|Xs′], Ys′, [X′|Zs′]))
exists and assume
  I ∪ I₀ ⊨ ⟨(nonvar(Xs′), a(Xs′, Ys′, Zs′))θ, γ⟩,
with γ satisfying the restriction of Definition 2.28. We have to show that
  I ∪ I₀ ⊨ ⟨A, (θγ)|A⟩.
Clearly (nonvar(Xs′)θ, a(Xs′, Ys′, Zs′)θ, ε, γ) is a good tuple. Then, by the semantics of nonvar, it follows that
  I ∪ I₀ ⊨ ⟨(nonvar(Xs′), a(Xs′, Ys′, Zs′))θ, γ⟩ iff I ∪ I₀ ⊨ ⟨a(Xs′, Ys′, Zs′)θ, γ⟩, with Xs′θ ∉ Var.
Then we have
  |Xs′θγ| + |Ys′θγ| = |Zs′θγ|
and, by
  rθ = [X′|Xs′]θ, sθ = Ys′θ and tθ = [X′|Zs′]θ,
it follows that
  |rθγ| + |sθγ| = |tθγ|.
Hence I ∪ I₀ ⊨ ⟨A, (θγ)|A⟩ holds. This concludes the proof that I ∪ I₀ is a θ-model of append. □
Theorem 4.4 append is acceptable w.r.t. | | and I ∪ I₀.
Proof. Analogous to that of Theorem 4.1, due to the similarity between clauses (a2) and (l2). □
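As an informal check (a sketch, with outcomes read off directly from the clauses), strong termination of APPEND covers non-ground goals as well: the goal ← a([a,b], [c], Zs) succeeds with c.a.s. {Zs/[a,b,c]}, using (a2) twice and then (a1) together with the clauses for list, while the goal ← a(Xs, [c], [a,b,c]), with Xs a variable, fails finitely, since (a1) requires [c] and [a,b,c] to unify and (a2) introduces a fresh variable on which nonvar fails.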
We can now formulate our modular approach to termination.
Definition 4.5 Let P₁ and P₂ be two Prolog programs. We say that P₂ extends P₁, and write P₁ < P₂, if
(i) P₁ and P₂ define different relations,
(ii) no relation defined in P₂ occurs in P₁. □
Informally, P₂ extends P₁ if P₂ defines new relations, possibly using the relations defined already in P₁. For example the program append extends the program list. The following theorem formalizes the idea used to prove termination of the APPEND program.
Theorem 4.6 (Modularity) Suppose P₂ extends P₁. Assume that
(i) P₁ is acceptable,
(ii) P₂ is acceptable w.r.t. a θ-model I of P₁ ∪ P₂ and a level mapping | | such that |A| = 0 if A contains a relation defined in P₁.
Then P₁ ∪ P₂ strongly terminates.
Proof. P₂ extends P₁. Thus P₁ ∪ P₂ strongly terminates iff P₁ strongly terminates and P₂ strongly terminates when the relations defined in P₁ are treated as built-in's defined by
  [[p]] = {⟨A, θ⟩ | A contains p and there exists an LD-refutation of P₁ ∪ {← A} with c.a.s. θ}.
Now, by (i) and the Soundness III Theorem 3.5 P₁ strongly terminates. To deal with the other conjunct consider N_{P₁∪P₂}, the least θ-model of P₁ ∪ P₂. By (ii) and Corollary 2.14 P₂ is acceptable w.r.t. N_{P₁∪P₂} and the level mapping | |. Moreover, by Corollary 2.25 and the fact that P₂ extends P₁ we have for all atoms A containing a relation p defined in P₁
  N_{P₁∪P₂} ⊨ ⟨A, θ⟩ iff ⟨A, θ⟩ ∈ [[p]].
Thus by the Soundness III Theorem 3.5 P₂ strongly terminates when the relations defined in P₁ are treated as built-in's defined as above. This concludes the proof of the theorem. □
We illustrate the use of this theorem in the example below.
Unification
Consider the program UNIFY (for unification without occurs check) from Sterling and Shapiro [SS86, page 150]. In this program several built-in's, namely var, nonvar, =, constant, compound, functor, > are used. The meaning of them was already given in Section 2. Additionally, the function "−" (minus) is used on terms. Its meaning is implicitly referred to within the description of the meaning of ":=". For instance, ⟨x := 3 − 1, {x/2}⟩ ∈ [[:=]]. The program UNIFY consists of the following clauses.

(u1)   unify(X,Y) ← var(X), var(Y), X = Y.
(u2)   unify(X,Y) ← var(X), nonvar(Y), X = Y.
(u3)   unify(X,Y) ← var(Y), nonvar(X), Y = X.
(u4)   unify(X,Y) ← nonvar(X), nonvar(Y), constant(X), constant(Y), X = Y.
(u5)   unify(X,Y) ← nonvar(X), nonvar(Y), compound(X), compound(Y), term-unify(X,Y).
(tu)   term-unify(X,Y) ← functor(X,F,N), functor(Y,F,N), unify-args(N,X,Y).
(uar1) unify-args(N,X,Y) ← N > 0, unify-arg(N,X,Y), N1 := N-1, unify-args(N1,X,Y).
(uar2) unify-args(0,X,Y).
(ua)   unify-arg(N,X,Y) ← arg(N,X,ArgX), arg(N,Y,ArgY), unify(ArgX,ArgY).
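For orientation, here is how a sample goal is evaluated (a sketch; the step-by-step description of the clauses follows below). The goal ← unify(f(X,a), f(b,Y)) is resolved via (u5) and (tu); functor binds N to 2, so unify-args(2, f(X,a), f(b,Y)) first unifies the second arguments via (ua), where ← unify(a, Y) succeeds through (u3) with {Y/a}, and then the first arguments, where ← unify(X, b) succeeds through (u2) with {X/b}; finally (uar2) closes the recursion, so the computed answer substitution is {X/b, Y/a}. In contrast, the goal ← unify(f(X), g(X)) fails finitely: clauses (u1) - (u4) are not applicable and in (tu) the two functor calls cannot agree on the function symbol.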
We assume that UNIFY operates on the domain of natural numbers over which the built-in relation > and the function −, both written in infix notation, are defined. In Pieramico [Pie91] it was proved that UNIFY terminates for ground goals by showing that the program obtained by deleting all built-in relations is acceptable in the sense of Apt and Pedreschi [AP90]. We prove here a stronger statement, namely that UNIFY strongly terminates, by showing that it is acceptable in the sense of Definition 3.13.
For the subsequent analysis it is important to understand how this program operates. Intuitively, the goal unify(s,t) yields an mgu of s and t as a computed answer substitution if s and t unify, and otherwise it fails. It is evaluated as follows. If either s or t is a variable, then the built-in relation = is called (clauses (u1) - (u3)). It assigns to the term out of s, t which is a variable the other term. If both s and t are variables (clause (u1)) then s is chosen. If neither s nor t is a variable, but both are constants, then it is tested, again by means of =, whether they are equal (clause (u4)). The case when both s and t are compound terms is handled in clause (u5) by calling the relation term-unify. This relation is defined by clause (tu). The goal term-unify(s,t) is evaluated by first identifying the form of s and t by means of the built-in relation functor. If for some function symbol f and n ≥ 0, the term s is of the form f(t₁, ..., tₙ), then the relation unify-args is called. This relation is defined by clauses (uar1) and (uar2). The goal unify-args(n,s,t) succeeds if the sequence of the first n arguments of s can be unified with the sequence of the first n arguments of t. When n > 0, clause (uar1) is used and these arguments are unified pairwise starting with the last pair. This last pair is dealt with by calling the relation unify-arg which is defined by clause (ua). The goal unify-arg(n,s,t) is evaluated by first extracting the n-th arguments of s and t by means of the built-in relation arg, and then calling unify recursively on these arguments. If this call succeeds, the produced c.a.s. modifies s and t, and the recursive call of unify-args in clause (uar1) operates on this modified pair of s and t. Finally, when n = 0, unify-args(n,s,t) succeeds immediately (clause (uar2)).
It is clear from this description what is the intended meaning of the defined relations unify, term-unify, unify-args and unify-arg. In the proof of the strong termination of UNIFY only partial information about the meaning of these relations is needed. This information is captured in the θ-model I used below. The following definition is useful.
Definition 4.7 Let I be a θ-interpretation. We say that I is good if for all ⟨A, θ⟩ ∈ I we have Ran(θ) ⊆ Var(A). □
In good interpretations the truth of a conjunct (see Definition 2.7) is easier to check, as the condition that (A, B, θ, γ) is a good tuple is not needed. Indeed this condition holds for atoms defined in the program if the interpretation is good, and for built-in atoms it follows by Definition 2.1 and the Good Tuple Lemma 2.5.
We define the following θ-interpretation of UNIFY. For brevity we write Var(s, t) instead of Var(s) ∪ Var(t). The following assertions are used:
  Inv(s, t, θ) = (Var(sθ, tθ) = Var(s, t) ⇒ nodes(sθ) + nodes(tθ) = nodes(s) + nodes(t)),
  St(s, t, θ) = (Var(sθ) = Var(tθ), nodes(sθ) = nodes(tθ)).
  I = {⟨unify(s, t), θ⟩ | Ran(θ) ⊆ Var(s, t), St(s, t, θ), Inv(s, t, θ)}
    ∪ {⟨term-unify(s, t), θ⟩ | Ran(θ) ⊆ Var(s, t), St(s, t, θ), Inv(s, t, θ)}
    ∪ {⟨unify-args(n, s, t), θ⟩ | n = 0 or (n > 0, Ran(θ) ⊆ Var(s, t), St(sᵢ, tᵢ, θ) for i ∈ [1, n], Inv(s, t, θ))}
    ∪ {⟨unify-arg(n, s, t), θ⟩ | n is a natural number, Ran(θ) ⊆ Var(s, t), St(sₙ, tₙ, θ), Inv(s, t, θ)}.
Lemma 4.8 I is a θ-model of UNIFY.
Proof. The condition Ran(θ) ⊆ Var(s, t) that occurs in I implies that I is good.
Consider clauses (u1) - (u4). ⟨s = t, θ⟩ ∈ [[=]] iff θ = mgu(s, t), with θ relevant. Then Ran(θ) ⊆ Var(s, t), St(s, t, θ) and Inv(s, t, θ) hold. This implies that I is a θ-model of (u1) - (u4).
I is a θ-model of (u5), since the relations unify and term-unify are equivalent w.r.t. I.
I is a θ-model of (uar1), since if St(sₙ, tₙ, θ) and St(sᵢ, tᵢ, θ) hold for all i ∈ [1, n−1] then St(sᵢ, tᵢ, θ) holds for all i ∈ [1, n].
I is a θ-model of (uar2). In fact for an atom A = unify-args(n, s, t) and a variant unify-args(0, X′, Y′) of (uar2) s.t. θ = mgu(A, unify-args(0, X′, Y′)) exists, we have nθ = 0.
Consider now the clause (tu). Let A = term-unify(x, y) and let
  term-unify(X′, Y′) ← functor(X′, F′, N′), functor(Y′, F′, N′), unify-args(N′, X′, Y′)
be a variant of (tu) disjoint with A. Suppose
  θ = mgu(A, term-unify(X′, Y′))
exists and assume
  I ⊨ ⟨(functor(X′, F′, N′), functor(Y′, F′, N′), unify-args(N′, X′, Y′))θ, γ⟩.
We need to show that I ⊨ ⟨A, (θγ)|A⟩. F′θ and N′θ are in Var. Then by the semantics of functor we have that
  I ⊨ ⟨(functor(X′, F′, N′), functor(Y′, F′, N′), unify-args(N′, X′, Y′))θ, γ⟩
implies
  N′θγ = a(x), F′θγ = funct(x) = funct(y), I ⊨ ⟨unify-args(N′, X′, Y′)θ, γ⟩, xθ = x, yθ = y and (θγ)|A = γ|A.
But for compound terms x and y
  I ⊨ ⟨term-unify(x, y), γ⟩ iff I ⊨ ⟨unify-args(a(x), x, y), γ⟩.
Then I ⊨ ⟨term-unify(x, y), (θγ)|A⟩.
It remains to check that I is a model of (ua). Let A = unify-arg(n, x, y) and let
  unify-arg(N′, X′, Y′) ← arg(N′, X′, ArgX′), arg(N′, Y′, ArgY′), unify(ArgX′, ArgY′)
be a variant of (ua) disjoint with A. Suppose
  θ = mgu(A, unify-arg(N′, X′, Y′))
exists and assume
  I ⊨ ⟨(arg(N′, X′, ArgX′), arg(N′, Y′, ArgY′), unify(ArgX′, ArgY′))θ, γ⟩.
We need to show that
  I ⊨ ⟨A, (θγ)|A⟩.
ArgX′θ and ArgY′θ are in Var. Then by the semantics of arg we have that
  I ⊨ ⟨(arg(N′, X′, ArgX′), arg(N′, Y′, ArgY′), unify(ArgX′, ArgY′))θ, γ⟩
implies
  N′θγ = n, with n > 0, ArgX′θγ = xₙ, ArgY′θγ = yₙ, I ⊨ ⟨unify(ArgX′, ArgY′)θ, γ⟩, xθ = x, yθ = y and (θγ)|A = γ|A.
But for compound terms x and y
  I ⊨ ⟨unify-arg(n, x, y), γ⟩ iff I ⊨ ⟨unify(xₙ, yₙ), γ⟩.
Then I ⊨ ⟨unify-arg(n, x, y), (θγ)|A⟩. This concludes the proof that I is a θ-model of UNIFY. □
Next, we define a level mapping | |. To this end we use the lexicographic ordering < defined on triples of natural numbers. In this ordering
  ⟨n₁, n₂, n₃⟩ < ⟨m₁, m₂, m₃⟩
iff
  (n₁ < m₁) or (n₁ = m₁ ∧ n₂ < m₂) or (n₁ = m₁ ∧ n₂ = m₂ ∧ n₃ < m₃).
We put
  |unify(s, t)| = ⟨ card(Var(s, t)), nodes(s) + nodes(t), 1 ⟩,
  |term-unify(s, t)| = ⟨ card(Var(s, t)), nodes(s) + nodes(t), 0 ⟩,
  |unify-args(n, s, t)| = ⟨ card(Var(s, t)), f(n, s, t), 3 ⟩,
  |unify-arg(n, s, t)| = ⟨ card(Var(s, t)), nodes(sₙ) + nodes(tₙ), 2 ⟩,
  |A| = ⟨ 0, 0, 0 ⟩ if A is built-in,
where card(S) indicates the cardinality of the set S and f(n, s, t) denotes the sum of the numbers of nodes of the i-th components of s and t for i ∈ [1, n], that is
  f(n, s, t) = Σᵢ₌₁ⁿ (nodes(sᵢ) + nodes(tᵢ)).
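As a quick illustration (hypothetical terms; we only assume here that nodes(u) counts the symbol occurrences in u, so that, e.g., nodes(f(X,a)) = 3), take s = f(X,a) and t = f(Y,b). Then card(Var(s, t)) = 2 and
  |unify(s, t)| = ⟨2, nodes(s) + nodes(t), 1⟩ > ⟨2, nodes(s) + nodes(t), 0⟩ = |term-unify(s, t)| > ⟨2, f(2, s, t), 3⟩ = |unify-args(2, s, t)|,
since nodes(s) + nodes(t) = f(2, s, t) + 2: the first comparison decreases the third component and the second one decreases the second component. These are precisely the decreases exploited for clauses (u5) and (tu) in the proof below.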
We can now prove the desired result.
Theorem 4.9 UNIFY is acceptable w.r.t. | | and I.
Proof. Notice that any atom in the body of an instance of a clause in UNIFY satisfies property (ii) of Definition 3.4, since each clause with nonempty body is in homogeneous form. Any instance of (u1), (u2), (u3), (u4) satisfies the appropriate requirement since
  |unify(s, t)| > ⟨0, 0, 0⟩.
Consider now a head instance C = A ← B₁, B₂, B₃, B₄, B₅ of (u5). C is of the form
  unify(s, t) ← nonvar(s), nonvar(t), compound(s), compound(t), term-unify(s, t).
We now prove two claims which obviously imply that C satisfies the appropriate requirement.
Claim 1 |A| > |Bᵢ| for i = 1, ..., 4.
Proof. Note that |A| > ⟨0, 0, 0⟩ = |Bᵢ| for i = 1, ..., 4. □
Claim 2 Suppose that I ⊨ ⟨B₁, B₂, B₃, B₄, θ⟩. Then |A| > |B₅θ|.
Proof. By the semantics of the built-in's nonvar and compound it follows that sθ = s, tθ = t. So
  |unify(s, t)| = ⟨ card(Var(s, t)), nodes(s) + nodes(t), 1 ⟩ > ⟨ card(Var(s, t)), nodes(s) + nodes(t), 0 ⟩ = |term-unify(s, t)|. □
Consider a head instance C = A ← B₁, B₂, B₃ of (tu). C is of the form
  term-unify(s, t) ← functor(s, F, N), functor(t, F, N), unify-args(N, s, t).
We now prove two claims which obviously imply that C satisfies the appropriate requirement.
Claim 1 |A| > |Bᵢ| for i = 1, 2.
Proof. Note that |A| > ⟨0, 0, 0⟩ = |Bᵢ| for i = 1, 2. □
Claim 2 Suppose that I ⊨ ⟨B₁, B₂, θ⟩. Then |A| > |B₃θ|.
Proof. By assumption sθ = s, tθ = t and n = Nθ = a(s) = a(t). Notice that
  nodes(s) + nodes(t) > f(n, s, t).
So
  |term-unify(s, t)| > |unify-args(n, s, t)|. □
Consider a head instance C = A ← B₁, B₂, B₃, B₄ of (uar1). C is of the form
  unify-args(n, s, t) ← n > 0, unify-arg(n, s, t), N1 := n − 1, unify-args(N1, s, t).
We now prove three claims which obviously imply that C satisfies the appropriate requirement.
Claim 1 |A| > |Bᵢ| for i = 1, 3.
Proof. Note that |A| > ⟨0, 0, 0⟩ = |Bᵢ| for i = 1, 3. □
Claim 2 Suppose that I ⊨ ⟨B₁, θ⟩. Then |A| > |B₂θ|.
Proof. By the semantics of the built-in > it follows that sθ = s, tθ = t, n > 0. Notice that
  f(n, s, t) ≥ nodes(sₙ) + nodes(tₙ).
So
  |unify-args(n, s, t)| > |unify-arg(n, s, t)|. □
Claim 3 Suppose that I ⊨ ⟨B₁, B₂, B₃, θ⟩. Then |A| > |B₄θ|.
Proof. By the semantics of the built-in's >, := and of the relation unify-arg it follows that n > 0, a(s) ≥ n > 0, N1θ = n − 1 and Var(sθ, tθ) ⊆ Var(s, t). If Var(sθ, tθ) ⊂ Var(s, t) then
  card(Var(s, t)) > card(Var(sθ, tθ)).
If Var(sθ, tθ) = Var(s, t) then, by nodes(sθ) + nodes(tθ) = nodes(s) + nodes(t), it follows that
  f(n, s, t) > f(n − 1, sθ, tθ).
So in both cases we have
  |unify-args(n, s, t)| > |unify-args(n − 1, sθ, tθ)|. □
Consider a head instance C = A ← B₁, B₂, B₃ of (ua). C is of the form
  unify-arg(n, s, t) ← arg(n, s, ArgX), arg(n, t, ArgY), unify(ArgX, ArgY).
We now prove two claims which obviously imply that C satisfies the appropriate requirement.
Claim 1 |A| > |Bᵢ| for i = 1, 2.
Proof. Note that |A| > ⟨0, 0, 0⟩ = |Bᵢ| for i = 1, 2. □
Claim 2 Suppose that I ⊨ ⟨B₁, B₂, θ⟩. Then |A| > |B₃θ|.
Proof. Since in the clause C the third argument of arg is a variable, it follows from the semantics of arg that sθ = s, tθ = t, n > 0, a(s) ≥ n > 0, ArgXθ = sₙ, ArgYθ = tₙ. So
  |unify-arg(n, s, t)| > |unify(sₙ, tₙ)|. □ □
Consider now the program UNIFYoc for the unification with occur check (see Sterling and Shapiro [SS86, page 152]). Let UNIFY' be the program obtained from UNIFY by introducing the atom not-occurs-in(X,Y) before the last atom in the bodies of clauses (u2) and (u3). Then UNIFYoc is the union of UNIFY' with the following program not-occur defining the relation not-occurs-in/2:

(noc1)  not-occurs-in(X,Y) ← var(X), X \== Y.
(noc2)  not-occurs-in(X,Y) ← nonvar(Y), constant(Y).
(noc3)  not-occurs-in(X,Y) ← nonvar(Y), compound(Y), functor(Y,F,N), not-occurs-in(N,X,Y).
(no1)   not-occurs-in(N,X,Y) ← N > 0, arg(N,Y,Arg), not-occurs-in(X,Arg), N1 := N-1, not-occurs-in(N1,X,Y).
(no2)   not-occurs-in(0,X,Y).
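The effect of the added occur check can be seen on the (hypothetical) goal ← unify(X, f(X)): under UNIFY clause (u2) applies and the built-in = simply binds X to f(X), so the goal succeeds, whereas under UNIFY' the extra atom not-occurs-in(X, f(X)) fails (X occurs in f(X), assuming not-occurs-in behaves according to its intended meaning), and since no other clause of UNIFYoc is applicable the goal fails finitely.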
By the Modularity Theorem 4.6 and the Soundness IV Theorem 3.16, to prove that UNIFYoc strongly terminates it suffices to prove that not-occur is acceptable and then prove that UNIFY' is acceptable w.r.t. a θ-model of UNIFYoc and a level mapping | | such that |not-occurs-in(s, t)| = 0 for all s, t. To prove that not-occur is acceptable we define an appropriate level mapping with
  |not-occurs-in(x, y)| = nodes(y) + 1,
  |not-occurs-in(n, x, y)| = nodes(y) − Σ_{i=n+1}^{a(y)} nodes(yᵢ) if n > 0,
  |not-occurs-in(0, x, y)| = 0,
  |A| = 0 if A is a built-in.
Next, we define a θ-interpretation of not-occur by putting
  I′ = {⟨not-occurs-in(s, t), θ⟩} ∪ {⟨not-occurs-in(n, s, t), θ⟩ | 0 ≤ n ≤ a(t)}.
Lemma 4.10 I′ is a θ-model of not-occur.
Proof. Notice that if 1 ≤ n ≤ a(t) and 0 ≤ n − 1 ≤ a(t) then 0 ≤ n ≤ a(t). This implies that I′ is a θ-model of not-occur. □
Lemma 4.11 not-occur is acceptable w.r.t. | | and I′.
Proof. Notice that condition (ii) of Definition 3.4 is satisfied since not-occur is stable. Any instance of (noc1) and (noc2) satisfies the appropriate requirement since
  |not-occurs-in(s, t)| > 0.
Consider an instance C = A ← B₁, B₂, B₃, B₄ of (noc3). C is of the form
  not-occurs-in(s, t) ← nonvar(t), compound(t), functor(t, F, N), not-occurs-in(N, s, t).
We now prove two claims which obviously imply that C satisfies the appropriate requirement.
Claim 1 |A| > |Bᵢ| for i = 1, 2, 3.
Proof. Notice that |A| > 0 = |Bᵢ| for i = 1, 2, 3. □
Claim 2 Suppose that I′ ⊨ ⟨B₁, B₂, B₃, θ⟩. Then |A| > |B₄θ|.
Proof. By the semantics of the built-in's nonvar, compound and functor we have sθ = s, tθ = t and Nθ = a(t). So
  |not-occurs-in(s, t)| = nodes(t) + 1 > nodes(t) = |not-occurs-in(N, s, t)θ|. □
Consider now an instance C = A ← B₁, B₂, B₃, B₄, B₅ of (no1). C is of the form
  not-occurs-in(n, s, t) ← n > 0, arg(n, t, Arg), not-occurs-in(s, Arg), N1 := n − 1, not-occurs-in(N1, s, t).
We now prove three claims which obviously imply that C satisfies the appropriate requirement.
Claim 1 |A| > |Bᵢ| for i = 1, 2, 4.
Proof. Notice that |A| > 0 = |Bᵢ| for i = 1, 2, 4. □
Claim 2 Suppose that I′ ⊨ ⟨B₁, B₂, θ⟩. Then |A| > |B₃θ|.
Proof. By the semantics of the built-in's > and arg we have sθ = s, tθ = t, Argθ = tₙ and 1 ≤ n ≤ a(t). So
  |not-occurs-in(n, s, t)| = nodes(t) − Σ_{i=n+1}^{a(t)} nodes(tᵢ) > nodes(tₙ) + 1 = |not-occurs-in(s, tₙ)|. □
Claim 3 Suppose that I′ ⊨ ⟨B₁, B₂, B₃, B₄, θ⟩. Then |A| > |B₅θ|.
Proof. By the semantics of the built-in's >, arg, := and of the relation not-occurs-in we have sθ = s, tθ = t, Argθ = tₙ, N1θ = n − 1 (≥ 0) and 1 ≤ n ≤ a(t). So
  |not-occurs-in(n, s, t)| = nodes(t) − Σ_{i=n+1}^{a(t)} nodes(tᵢ)
                           > nodes(t) − Σ_{i=n}^{a(t)} nodes(tᵢ) = |not-occurs-in(n − 1, s, t)|. □
To prove that UNIFY' is acceptable we consider the level mapping and θ-model defined for UNIFY and we treat not-occurs-in as a built-in relation whose semantics is given by I′.
Lemma 4.12 UNIFY' is acceptable w.r.t. | | and I.
Proof. Notice that if C = A ← B₁, ..., B₄ is an instance of (u2′) (resp. of (u3′), exchanging the positions of s and t in the body of the clause), then C is of the form
  unify(s, t) ← var(s), nonvar(t), not-occurs-in(s, t), s = t.
If I ⊨ ⟨B₁, B₂, B₃, θ⟩ then by the semantics of the built-in's var, nonvar and of the relation not-occurs-in we have sθ = s and tθ = t, i.e. not-occurs-in(s, t) does not modify s and t. It follows that the proof that UNIFY' is acceptable w.r.t. | | and I is analogous to the one for UNIFY given in Theorem 4.9. □
Acknowledgements We thank Annalisa Bossi and Kees Doets for helpful discussions on the subject of the Good Tuple Lemma 2.5.
References

[AD92] K.R. Apt and K. Doets. A new definition of SLDNF-resolution. Technical report, Department of Mathematics and Computer Science, University of Amsterdam, The Netherlands, 1992. To appear.
[AMP92] K.R. Apt, E. Marchiori, and C. Palamidessi. A theory of first order built-in's of Prolog. Res. Report CS-R9216, CWI, Amsterdam, 1992.
[AP90] K.R. Apt and D. Pedreschi. Studies in pure Prolog: termination. In J.W. Lloyd, editor, Symposium on Computational Logic, pages 150-176, Berlin, 1990. Springer-Verlag.
[Apt90] K.R. Apt. Logic programming. In J. van Leeuwen, editor, Handbook of Theoretical Computer Science, Vol. B, pages 493-574. Elsevier, 1990.
[BCF91] A. Bossi, N. Cocco, and M. Fabris. Proving termination of logic programs by exploiting term properties. In Proceedings of TAPSOFT '91, pages 153-180, 1991.
[Bez89] M. Bezem. Characterizing termination of logic programs with level mappings. In E.L. Lusk and R.A. Overbeek, editors, Proceedings of the North American Conference on Logic Programming, pages 69-80. The MIT Press, 1989.
[Bor89] E. Börger. A logical operational semantics of full Prolog, Part III: Built-in predicates for files, terms, arithmetic and input-output. In Y. Moschovakis, editor, Proceedings Workshop on Logic from Computer Science. Springer MSRI Publications, 1989.
[Cav89] L. Cavedon. Continuity, consistency, and completeness properties for logic programs. In G. Levi and M. Martelli, editors, Proceedings of the Sixth International Conference on Logic Programming, pages 571-584. The MIT Press, 1989.
[Cla79] K.L. Clark. Predicate logic as a computational formalism. Technical Report DOC 79/59, Imperial College, London, 1979.
[DF87] P. Deransart and G. Ferrand. An operational formal definition of Prolog. In Proceedings of the 4th Symposium on Logic Programming, pages 162-172. Computer Society Press, 1987.
[FLMP89] M. Falaschi, G. Levi, M. Martelli, and C. Palamidessi. Declarative modeling of the operational behaviour of logic languages. Theoretical Computer Science, 69:289-318, 1989.
[HL88] P.M. Hill and J.W. Lloyd. Analysis of meta-programs. In H.D. Abramson and M.H. Rogers, editors, Proceedings of the Meta88 Workshop, pages 23-52. MIT Press, 1988.
[Llo87] J.W. Lloyd. Foundations of Logic Programming. Springer-Verlag, Berlin, second edition, 1987.
[Pie91] C. Pieramico. Metodi formali di ragionamento sulla terminazione di programmi Prolog. Tesi di Laurea, Università degli Studi di Pisa, 1991. In Italian.
[Plu91] L. Plümer. Automatic termination proofs for Prolog programs operating on nonground terms. In Proceedings of the 1991 International Logic Programming Symposium, 1991.
[SS86] L. Sterling and E. Shapiro. The Art of Prolog. MIT Press, 1986.
[vEK76] M.H. van Emden and R.A. Kowalski. The semantics of predicate logic as a programming language. Journal of the ACM, 23:733-742, 1976.
5 Appendix
5.1 Proof of the Good Tuple Lemma
We first prove the lemma in absence of built-in's.
Let ξ be an LD-refutation of P ∪ {← A, B} with c.a.s. τ. Consider the first goal H in ξ which is an instance of ← B. H exists since ξ is a refutation. Let θ′ be the composition of the mgu's used in this prefix of ξ ending with H. Then H = ← Bθ′. Informally, ξ consists of a refutation of ← A followed by a refutation of an instance of ← B. To prove the claim we need to analyze these two refutations more precisely.
The first one is of the form G₀, ..., Gₘ with G₀ = ← A and Gₘ = □, and is obtained by deleting from all goals in the just defined prefix of ξ the last k atoms, where k is the number of atoms in B. In this sequence each goal Gᵢ for i ∈ [1, m] is obtained from the previous one using an mgu θᵢ₋₁ and a variant cᵢ₋₁ of a clause of P. The standardizing apart condition means that for i ∈ [0, m−1]
  Var(cᵢ) ∩ (Var(A, B) ∪ ⋃_{j=0}^{i−1} Var(cⱼ)) = ∅.   (36)
By assumption each θᵢ is relevant, so for i ∈ [0, m−1]
  Var(θᵢ) ⊆ Var(Gᵢ) ∪ Var(cᵢ),   (37)
and by the way Gᵢ₊₁ is formed and (37)
  Var(Gᵢ₊₁) ⊆ Var(Gᵢ) ∪ Var(cᵢ).   (38)
Using (38) i times we conclude by (37) that for i ∈ [0, m−1]
  Var(θᵢ) ⊆ Var(A) ∪ ⋃_{j=0}^{i} Var(cⱼ),   (39)
since ← A = G₀. But for any substitutions δ and γ we have Var(δγ) ⊆ Var(δ) ∪ Var(γ), so iterating this observation m times we get
  Var(θ′) ⊆ ⋃_{i=0}^{m−1} Var(θᵢ),   (40)
since θ′ = θ₀ ⋯ θₘ₋₁. Now, by (39) and (40) we conclude
  Var(θ′) ⊆ Var(A) ∪ ⋃_{i=0}^{m−1} Var(cᵢ).   (41)
Thus by (41) and the standardization apart condition (36)
  Var(θ′) ∩ Var(B) ⊆ Var(A).   (42)
Then θ = θ′ | Var(A) is a c.a.s. of P ∪ {← A}. As a consequence of (42)
  H = ← Bθ′ = ← B(θ′ | Var(B)) = ← B(θ′ | Var(A)) = ← Bθ.   (43)
Now we need to consider the second refutation. It is the suffix ξ′ of ξ beginning with H = ← Bθ. Let σ′ be the composition of the mgu's used in this suffix. Then σ = σ′ | Var(Bθ) is a c.a.s. of P ∪ {← Bθ}. We have Ran(θ) ⊆ Ran(θ′) ⊆ Var(θ′), so condition (i) of the definition of good tuple follows by (42).
ξ′ is of the form Hₘ₊₁, ..., Hₙ with Hₘ₊₁ = H and Hₙ = □. In this sequence each goal Hᵢ for i ∈ [m+2, n] is obtained from the previous one using an mgu θᵢ₋₁ and a variant cᵢ₋₁ of a clause of P. By the same reasoning as the one which led to (41) we conclude that
  Var(σ′) ⊆ Var(Bθ) ∪ ⋃_{i=m}^{n−1} Var(cᵢ),   (44)
since σ′ = θₘ ⋯ θₙ₋₁ and by (43) Hₘ₊₁ = H = ← Bθ. Thus
  Var(σ′) ∩ (Var(A, B) ∪ Var(θ′))
    ⊆ {(44) and (41)} (Var(Bθ) ∪ ⋃_{i=m}^{n−1} Var(cᵢ)) ∩ (Var(A, B) ∪ ⋃_{i=0}^{m−1} Var(cᵢ))
    ⊆ {(36)} Var(Bθ),
so we proved that
  Var(σ′) ∩ (Var(A, B) ∪ Var(θ′)) ⊆ Var(Bθ).   (45)
Since Ran(σ) ⊆ Ran(σ′) ⊆ Var(σ′) and Ran(θ) ⊆ Ran(θ′) ⊆ Var(θ′), (45) implies condition (ii) of the definition of good tuple. We have by (42)
  θ′ | Var(A, B) = θ′ | Var(A) = θ,   (46)
and by (45)
  σ′ | (Var(A, B) ∪ Ran(θ′)) = σ′ | Var(Bθ) = σ.   (47)
But for any substitutions δ, γ and a set of variables A we have
  (δγ) | A = (δ | A  γ | (A ∪ Ran(δ))) | A,   (48)
so
  τ = {definition of θ′ and σ′} (θ′σ′) | Var(A, B)
    = {(48)} (θ′ | Var(A, B)  σ′ | (Var(A, B) ∪ Ran(θ′))) | Var(A, B)
    = {(46) and (47)} (θσ) | Var(A, B).
(⇐) For a derivation ξ we denote by Var(ξ) the variables which appear in the goals of ξ, or in the mgu's used in ξ, or in the variants of clauses used in ξ.
Take an LD-refutation of P ∪ {← A} with c.a.s. θ. Rename those of its variables that do not occur in Var(A) ∪ Ran(θ) by some fresh variables different from those in Var(B) ∪ Ran(σ). In such a way we obtain an LD-refutation ξ of P ∪ {← A} with c.a.s. θ such that for every used variant c of a clause of P
  Var(c) ∩ Var(B) ⊆ Var(A) ∪ Ran(θ)   (49)
and
  Var(c) ∩ Ran(σ) ⊆ Var(A) ∪ Ran(θ).   (50)
Thus for every such c
  Var(c) ∩ Var(A, B)
    = {Var(A, B) = Var(A) ∪ Var(B)} (Var(c) ∩ Var(A)) ∪ (Var(c) ∩ Var(B))
    = {standardization apart condition} Var(c) ∩ Var(B)
    ⊆ {(49) and elementary set theory} (Var(c) ∩ Var(A)) ∪ (Var(c) ∩ (Ran(θ) ∩ Var(B)))
    ⊆ {(A, B, θ, σ) good tuple} (Var(c) ∩ Var(A)) ∪ (Var(c) ∩ Var(A))
    = {standardization apart condition} ∅,
i.e.
  Var(c) ∩ Var(A, B) = ∅.   (51)
Replace now in ξ every goal ← C by ← C, BθC, where θC is the composition of the mgu's used in the prefix of ξ ending with ← C. We claim that in such a way we obtain a prefix ξ′ of an LD-derivation for P ∪ {← A, B}. To this end it suffices to check that the standardization apart condition is satisfied. But this is a consequence of (51) and the fact that ξ is an LD-derivation. By definition, the last goal of ξ′ is ← Bθ.
Consider now an LD-refutation of P ∪ {← Bθ} with c.a.s. σ. Rename those of its variables that do not occur in Var(Bθ) ∪ Ran(σ) by some fresh variables which do not occur in ξ′. In such a way we obtain an LD-derivation η of P ∪ {← Bθ} with c.a.s. σ such that for every used variant d of a clause of P
  Var(d) ∩ Var(ξ′) ⊆ Var(Bθ) ∪ Ran(σ).   (52)
Thus for every such d
  Var(d) ∩ Var(A, B)
    ⊆ {Var(A, B) ⊆ Var(ξ′), (52) and elementary set theory} (Var(d) ∩ Var(Bθ)) ∪ (Var(d) ∩ (Ran(σ) ∩ Var(A, B)))
    ⊆ {(A, B, θ, σ) good tuple} (Var(d) ∩ Var(Bθ)) ∪ (Var(d) ∩ Var(Bθ))
    = {standardization apart condition} ∅,
i.e.
  Var(d) ∩ Var(A, B) = ∅.   (53)
Moreover, for every variant c of a clause of P used in ξ′
  Var(c) ∩ Var(d)
    ⊆ {Var(c) ⊆ Var(ξ′), (52) and elementary set theory} (Var(d) ∩ Var(Bθ)) ∪ (Var(c) ∩ Var(d) ∩ Ran(σ))
    ⊆ {standardization apart condition} Var(c) ∩ Var(d) ∩ Ran(σ)
    ⊆ {(50)} Var(d) ∩ (Var(A) ∪ Ran(θ)) ∩ Ran(σ)
    ⊆ {(A, B, θ, σ) good tuple} Var(d) ∩ Var(Bθ)
    = {standardization apart condition} ∅,
i.e.
  Var(c) ∩ Var(d) = ∅.   (54)
Now, (53) and (54) imply that ξ′ followed by η is an LD-refutation of P ∪ {← A, B}, since the standardization apart condition is satisfied. This concludes the proof in absence of built-in's.
To prove the lemma in presence of built-in's we need to take into account that in some cases, like that of functor, the generated mgu's are not relevant. In such a case, instead of (39) we can only claim
  Var(θᵢ) ⊆ Var(A) ∪ ⋃_{j=0}^{i} Var(cⱼ) ∪ X,   (55)
where X is a set of variables such that X ∩ Var(A, B) = ∅, and cᵢ is a variant of an arbitrary clause in P if the selected atom in Gᵢ₋₁ has a built-in relation. But then (55) still implies (42), and the proof that θ and σ are respectively a c.a.s. of P ∪ {← A} and a c.a.s. of P ∪ {← Bθ} goes through. Moreover a similar modification of (44) ensures that the proof that (A, B, θ, σ) is a good tuple goes through and that τ = (θσ) | Var(A, B). Finally, the proof of (⇐) remains valid. □
5.2 Proof of the Equivalence I Theorem
The following lemma is needed. By the length of a goal we mean the number of its atoms. For a goal G we denote its length by l(G).
Lemma 5.1 Let θ be a substitution. Then θ is the c.a.s. of the goal ← s = t if and only if θ = mgu(s, t).
Proof. Let G = (← s = t). We prove the lemma by induction on the length l(G) of G.
If l(G) = 0 then G = true. Then θ is the c.a.s. for G iff θ = ε iff θ = mgu((), ()).
If l(G) > 0 then we have that s = s₁, ..., sₙ and t = t₁, ..., tₙ with n > 0. By the semantics of = and by Definition 2.1, θ is the c.a.s. for G iff θ = (ηγ)|G, where η = mgu(s₁, t₁) and γ is the c.a.s. for G′ = (s₂ = t₂, ..., sₙ = tₙ)η. Since l(G′) < l(G), by the induction hypothesis applied to G′ we have γ = mgu((s₂, ..., sₙ)η, (t₂, ..., tₙ)η). Thus mgu(s, t) = ηγ. Moreover ηγ is idempotent, hence relevant, so (ηγ)|G = ηγ. □
We can now prove the Theorem. If G = true then θ = ε and the claim follows immediately. Otherwise let G = ← A, G′.
Suppose that ξ = G₀, G₁, ..., Gₙ with G₀ = G is a refutation of P ∪ {G} with c.a.s. θ. Let θ₁, ..., θₙ and C₁, ..., Cₙ be respectively the sequence of mgu's and of input clauses used in ξ. For every i ∈ [0, n] let ξᵢ be the partial derivation (i.e. a prefix of a derivation) of Hom(P) ∪ {Gᵢ} defined as follows. Let Gᵢ = ← Aᵢ, G′ᵢ. If Aᵢ is a built-in atom then ξᵢ = Gᵢ, Gᵢ₊₁. If Aᵢ is defined in P (hence in Hom(P)) let Cᵢ = H ← B. Consider the clause Hom(Cᵢ) = p(xᵢ) ← xᵢ = tᵢ, B, disjoint from G and from all input clauses of ξⱼ, for every j < i.
Let ηᵢ be the mgu of p(xᵢ) and Aᵢ s.t. Dom(ηᵢ) = xᵢ. Then R = ← (xᵢηᵢ = tᵢ, B, G′ᵢ) is the resolvent of Gᵢ with input clause Hom(Cᵢ) and mgu ηᵢ. By Lemma 5.1 there is a unique partial derivation ξᵢ′ of Hom(P) ∪ {R} with last goal Gᵢ₊₁. Then
  ξᵢ = Gᵢ, R, ξᵢ′.
For every i ∈ [0, n] the first goal of ξᵢ is Gᵢ and the last goal is Gᵢ₊₁. Then we can define ξ′ = ξ₀ ξ₁ ⋯ ξₙ. By construction all input clauses in ξ′ are standardized apart. Thus ξ′ is a refutation of Hom(P) ∪ {G} with c.a.s. (η₁θ₁ ⋯ ηₙθₙ)|G. Moreover, for all i ∈ [0, n], G_{kᵢ+1} is the resolvent of G_{kᵢ} with input clause Cᵢ and mgu θᵢ. Then ξ = G_{k₁}, ..., G_{kₙ}, with input clauses C₁, ..., Cₙ and mgu's θ₁, ..., θₙ, is a refutation of P ∪ {G} with c.a.s. (θ₁ ⋯ θₙ)|G. From Dom(ηᵢ) = xᵢ and from the standardization apart, Dom(ηᵢ) ∩ Var(G) = ∅ and Dom(ηᵢ) ∩ Var(θⱼ) = ∅ for all j < i. Then (η₁θ₁ ⋯ ηₙθₙ)|G = θ. □
5.3 Proof of the Equivalence II Theorem
The following two technical lemmas are useful.
Lemma 5.2 Let I be a θ-interpretation. Then I ⊨ ⟨s = t, θ⟩ if and only if θ = mgu(s, t).
Proof. The soundness and completeness of LD-resolution for goals containing only built-in predicates follows from the Soundness I Theorem 2.11 and from the Completeness I Theorem 2.24, since the semantics of built-in predicates does not depend on the form of the considered program. Then I ⊨ ⟨s = t, θ⟩ if and only if θ is a c.a.s. of P ∪ {← s = t}. Then the claim follows from Lemma 5.1. □
Lemma 5.3 Let I be a θ-interpretation and let B be a conjunct. If I ⊨ ⟨(s = t, B), θ⟩ then there exists γ s.t.
- I ⊨ ⟨s = t, η⟩, with η = mgu(s, t),
- I ⊨ ⟨Bη, γ⟩,
- Ran(γ) ∩ Var(s = t, B) ⊆ Var(Bη) and
- θ = ηγ.
Proof. We prove the lemma by induction on l(s = t). The case l(s = t) = 0 is immediate since then s = t is true and θ = γ.
If l(s = t) > 0 then s = t is a non-empty sequence, say s₁ = t₁, s′ = t′. By Definition 2.7 if I ⊨ ⟨(s = t, B), θ⟩ then there exist η, γ s.t.
1. I ⊨ ⟨s₁ = t₁, η⟩,
2. I ⊨ ⟨(s′ = t′, B)η, γ⟩,
3. Ran(γ) ∩ Var(s = t, B) ⊆ Var((s′ = t′, B)η),
4. θ = ηγ.
By definition of [[=]] it follows that η = mgu(s₁, t₁). Since l((s′ = t′)η) < l(s = t), by the induction hypothesis applied to 2. there exists γ′ s.t.
1′. I ⊨ ⟨(s′ = t′)η, η′⟩, with η′ = mgu(s′η, t′η),
2′. I ⊨ ⟨Bηη′, γ′⟩,
3′. Ran(γ′) ∩ Var((s′ = t′, B)η) ⊆ Var(Bηη′),
4′. γ = η′γ′.
By 1. and 1′.
  I ⊨ ⟨s = t, ηη′⟩.
By definition of mgu on sequences of terms ηη′ = mgu(s, t). Then by 4.,
  θ = ηη′γ′.
By 4′. Ran(γ′) ⊆ Ran(γ) and by 3. Ran(γ′) ∩ Var(s = t, B) ⊆ Var((s′ = t′, B)η). Hence by 3′.
  Ran(γ′) ∩ Var(s = t, B) ⊆ Var(Bηη′).
Thus the claim follows, with ηη′ in place of η and γ′ in place of γ. □
We can now prove Theorem 2.29. Suppose that I is a θ-model of P. Let p(x) ← x = t, B be a variant of a clause of Hom(P). Let θ be s.t.
  Dom(θ) = x, Ran(θ) ∩ Var(x = t, B) = ∅   (56)
and suppose that
  I ⊨ ⟨(x = t, B)θ, γ⟩.   (57)
We need to show that I ⊨ ⟨p(x), (θγ)|p(x)⟩. By (56) (x = t, B)θ = (xθ = t, B). By Lemma 5.3 and by (57) there exists δ s.t. I ⊨ ⟨xθ = t, η′⟩ with η′ = mgu(p(x)θ, p(t)),
  I ⊨ ⟨Bη′, δ⟩,   (58)
  Ran(δ) ∩ Var(xθ = t, B) ⊆ Var(Bη′),   (59)
  γ = η′δ.   (60)
By (56) p(t) ← B is a variant of a clause of P disjoint with p(x)θ. Since I is a θ-model of P, by (58) and (59) we get I ⊨ ⟨p(x)θ, (η′δ)|p(x)θ⟩ and by (60) we get (θη′δ)|p(x) = (θγ)|p(x). This proves that I ⊨ ⟨p(x), (θγ)|p(x)⟩.
Conversely suppose that I is a θ-model of Hom(P). Let A be an atom and let C = H ← B be a variant of a clause of P disjoint with A. Let θ = mgu(A, H). Suppose that
  I ⊨ ⟨Bθ, γ⟩   (61)
with
  Ran(γ) ∩ Var(A, H ← B) ⊆ Var(Bθ).   (62)
We need to show that I ⊨ ⟨A, (θγ)|A⟩. Let Hom(C) = p(x) ← x = t, B be disjoint from A. Let η be the mgu of A and p(x) s.t.
  Dom(η) = x.   (63)
Then
  θ = mgu(xη, t),   (64)
  Ran(η) ∩ Var(x = t, B) = ∅,   (65)
  (x = t, B)η = (xη = t, B)   (66)
and
  Ran(θ) ∩ Var(p(x)η) ⊆ Var(xη = t) ⊆ Var(xη = t, B).   (67)
By Lemma 5.2 and (64)
  I ⊨ ⟨xη = t, θ⟩.   (68)
By (61), (68) and by the Completeness I Theorem 2.24, θ is a c.a.s. for ← xη = t and γ is a c.a.s. for P ∪ {← Bθ}. Since Ran(θ) ⊆ Var(xη = t) and from (62), (xη = t, B, θ, γ) is a good tuple. Hence by Lemma 2.5 (θγ)|(xη = t, B) is a c.a.s. of P ∪ {← xη = t, B}. By the Soundness I Theorem 2.11, I ⊨ ⟨(xη = t, B), (θγ)|(xη = t, B)⟩. But I is a θ-model of Hom(P), so from ((θγ)|(xη = t, B))|p(x)η = (θγ)|p(x)η = (θγ)|A and from (66), (63), (65) and (67) we conclude that I ⊨ ⟨A, (θγ)|A⟩. □