Continuations in λProlog

Pascal Brisset

ECRC, Arabellastraße 17, D-8000 Munich 81, Germany

[email protected]

Olivier Ridoux

IRISA, Campus Universitaire de Beaulieu, F-35042 Rennes Cedex, France

[email protected]

Abstract Continuations are well known in functional programming, where they have been used to transform and compile programs. Some languages provide explicit manipulation of the continuation to the user: the user can catch and modify the current continuation. Continuations have also been used in the logic programming context to give a denotational semantics for Prolog, to generate Prolog compilers and to transform Prolog programs. In this paper, we propose to introduce new built-ins in a logic programming language to enable the user to explicitly replace the continuations. These built-ins give the user a new control over the execution. We choose λProlog because of its higher-order syntax and implications in goals, which are necessary for the definition and use of these built-ins. In order to define the built-ins, we extend to λProlog the Prolog semantics based on continuations. Then, we show that an exception mechanism can be easily implemented using these new built-ins. The proposed semantics is also used to prove the equivalence of goals changing the continuations.

Keywords Continuations, λProlog, semantics, cut, exception handling.

1 Introduction

A continuation is a function which denotes the rest of the computation. The idea is to give the continuation as an argument to each called function. Then, functions do not return but simply execute the continuation. For example, Continuation-Passing Style (CPS) is a simple way to accomplish the tail-recursion optimisation in functional programming languages. Continuations are powerful tools in the descriptive techniques of semantics. They lead to simple expressions for standard constructs and enable the description of exception mechanisms [5]. From a continuation-based semantics, it is possible to derive compilers automatically [1]. This approach is attractive because it closely relates the formal specification of a language

and its implementation. Continuations are useful objects for the compiler designer, and for the language user too. In some functional languages (Scheme), the continuation can be explicitly handled by the user. The user is allowed to catch the continuation (callcc, for "call with current continuation") and to replace the current continuation by one caught earlier (throw) [1]. Manipulations of the continuation allow the control of execution to be changed. The classic example using such manipulations is the programming of an exception mechanism.

Nicholson & Foo presented a denotational semantics of Prolog, based on continuations, in [12]. The important difference with the functional case is that two continuations are needed: a success continuation for the AND control and a failure continuation for the OR control. As in the functional case, it is possible to derive a compiler from this semantics [4]. Some transformations of logic programs are also based on CPS [16, 15, 17]. CPS has been used to compile Prolog into an imperative language too [9].

We propose to introduce new built-ins to handle the continuations in the logic programming language λProlog. λProlog, proposed by Miller [11], is an extension of Prolog where first-order terms are replaced by λ-terms and Horn clauses are replaced by higher-order hereditary Harrop formulas. We will see that the λ-terms are necessary to have a proper representation of the catch of the success continuation. The implication in goals of the Harrop formulas makes the use of the new built-ins easier. To define these new built-ins in λProlog, we extend the semantics of Nicholson & Foo [12] to the case of Harrop formulas. We call it an operational semantics because the mapping is not done into mathematical objects but into the language of an abstract machine.

Two built-ins allow the user to jump, during the resolution of one goal, to the next goal. The two others generalise the standard cut (!) and the local cut, to remove choice-points. A variant is known as the ancestral cut [13]. The novelty is that we express these cut managements simply and soundly with our continuation-based semantics. The use of the new built-ins is illustrated by examples. We propose an implementation of the block exception mechanism (a variant of the catch&throw proposed by the ISO standardisation committee [8]) in λProlog, and a solution to the problem that meta-calling cut (call(!)) has no effect. The semantics allows us to prove properties about what the built-ins actually do.

The paper is structured as follows: In section 2, λProlog types, terms, formulas and proofs are quickly reviewed. In sections 3 and 4, operational semantics of Prolog and λProlog programs are given. In section 5, we propose four built-ins to explicitly replace continuations, with examples and formal proofs using our semantics.

2 λProlog

As we will give λProlog source code in the following sections, the description is given using the concrete syntax of λProlog. We briefly (and informally) present the extension of Prolog to higher-order hereditary Harrop formulas.

2.1 Types, Terms & Formulas

Simple types are first-order terms built from a collection of type constants, a collection of type variables and one dedicated binary type constant, '->' (right associative). Intuitively, A -> B is the type of functions from terms of type A to terms of type B. Simply typed λ-terms are built from a collection of constants, a collection of variables, a collection of unknowns (logic variables) and two construction rules, abstraction (x\ E) and application ((E F)). Each constant must be declared with the 'type' directive, specifying the type of the constant:

type cons A -> (list A) -> (list A).

declares cons as a constant of type A -> (list A) -> (list A).

Three equivalence relations are defined on the set of terms. α-equivalence defines the consistent renaming of bound variables. β-equivalence defines the consistent reduction of an application ((x\ E) F) (called a β-redex) into E where each free occurrence of x is substituted by F. η-equivalence formalises the extensionality of λ-defined functions. The unification of simply typed λ-terms taking into account the three equivalence relations is an undecidable and infinitary problem: there may be infinitely many most general solutions. So, the unification algorithm is not deterministic.

A λProlog program is a set of declarations and an ordered set of clauses. As in Prolog, the constructor of non-empty-body clauses is ':-' and the conjunction constructor is ','. A universally quantified goal is written (pi x\ G) ("for all x, G") and an implication is written (H => G) ("H implies G").
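As a small concrete illustration (our own declarations, not taken from the paper; the list type constructor and the constant nil are assumed to be declared as usual), a polymorphic append predicate could be written in this syntax, the predicate type ending in o, the type of propositions:

type nil    (list A).
type append (list A) -> (list A) -> (list A) -> o.

append nil L L.
append (cons X L1) L2 (cons X L3) :- append L1 L2 L3.

The term x\ (cons x nil) is an abstraction of type A -> (list A), and the β-redex ((x\ (cons x nil)) 1) reduces to (cons 1 nil).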

2.2 Proofs & Search

The evaluation of a logic programming language program relies on two choices: a class of proofs and a search-strategy. The class of proofs is given by inference rules and constraints on their application. Miller showed that uniform proofs (roughly, goal-directed proofs) are complete for λProlog with respect to intuitionistic provability [10]. If many proofs exist, the search-strategy specifies how they are enumerated.

Proofs A λProlog execution is a proof of a goal G using a program P with a signature Σ, which is the set of constants appearing in P. This statement is written with the following sequent: Σ; P ⟶ G. We call a Σ-term a term whose only constants are in Σ. Proofs are done using the following inference rules:

   Σ; P ⟶ G1     Σ; P ⟶ G2
   -------------------------- AND
   Σ; P ⟶ G1, G2

   Σ; P, A:-G ⟶ Gθ
   ------------------ IMP_L
   Σ; P, A:-G ⟶ Aθ

   ------------------ AX
   Σ; P, A ⟶ Aθ

   Σ + c; P ⟶ G[x ← c]
   ---------------------- ALL_R   (c ∉ Σ)
   Σ; P ⟶ pi x\ G

   Σ; H, P ⟶ G
   --------------- IMP_R
   Σ; P ⟶ H => G

Proofs for Prolog only need the first three rules and do not require P and Σ, which are never modified in these rules. In the second and third rules, θ is a closed substitution on Σ (i.e., Aθ is a Σ-term). The last two rules are the new ones of λProlog. The rule for universal quantification must be read as follows: "To solve (pi x\ G), pick a new constant c, add it to the signature and solve G where the free occurrences of x are replaced by c". The important thing is that c is a new constant which cannot appear outside the current subproof. The rule for implication must be read as follows: "To solve (H => G), add H to the program and solve G". An implication is not an assert for at least two reasons: the added clause H is removed after the resolution of G, and it may contain unknowns (logic variables) which will not be copied when the clause is used.
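To make the two new rules concrete, here are a few small goals and their expected behaviour (our own examples, assuming a predicate q declared with type q int -> o and no clauses for q in the program):

% ?- (q 1 => q X).         succeeds with X = 1: the clause (q 1) is usable
%                          only while the right-hand goal is solved.
% ?- pi x\ (q x => q x).   succeeds: (q x) is assumed for the fresh constant x
%                          and then proved from that assumption.
% ?- pi x\ (q x).          fails: no clause proves q for a brand new constant.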

Strategy We choose a depth-first search, as in Prolog. Because programs contain side-effects, the programmer needs to know the order of evaluation of goals. As in Prolog, the leftmost goal is solved first.

3 An Operational Semantics for Prolog

In this section, we sum up the denotational semantics of Nicholson & Foo [12] for Prolog. This semantics will be extended to the case of λProlog in the next section and used to define the new built-ins in section 5. Our semantics is described as a translation of logic programs into programs of an abstract machine. The translation is given step by step (the final translation for Prolog is given in figure 1). Success and failure continuations are introduced one at a time, then the cut is handled. We do not want to translate terms but only formulas, so we need an abstract machine that is able to handle terms and to do unification on them. To do that, we need three languages: LT, used for writing the translation; the source language LS (the concrete syntax of Prolog), which is translated; and the language LAM of the abstract machine, for which code is produced. In this section, LS items are Horn clauses. For the sake of simplicity, we only deal with predicates of arity 1. This is not a restriction: greater arities can be dealt with using term constructors. So, for a predicate p, a clause is either an atomic formula (p X) or a clause (p X :- G). A goal is an atomic formula or a conjunction of goals (G1, G2).

3.1 Success Continuation

First, we assume predicates have only one clause, so there is no backtracking after a failure. We make this assumption temporarily (cf. 3.2), to introduce one continuation at a time. To execute such programs, only one continuation is needed, called the success continuation. To do the translation, three functions of LT are needed: λx.T[[x]], λx.Tp[[x]] and λx.Tg[[x]] translate respectively a predicate, a set of clauses and a goal. The translations produce functions which are composed to give instructions of the abstract machine. The functions take the two following arguments:

  - α stands for the predicate argument;
  - κ stands for the success continuation.

The translation of a term of LS (a predicate argument) to a term of LAM is done with the λx.[[x]] function. Here, it may be seen as the identity function. We start with a three-instruction abstract machine. The instruction unify takes two terms and two continuations. If unification of the two terms is possible, the first continuation is executed, otherwise the second continuation is executed. In this section, the second continuation is always the instruction abort, which stops the evaluation. The goto instruction takes a predicate name p, a predicate argument α and a continuation κ, and executes the code associated to p with the arguments α and κ. This instruction is not called call because the code associated to p never returns. The unify and goto instructions are formally defined by the following equations:

unify ≡ λt1 λt2 λx λy.( x   if t1 unifies with t2
                        y   otherwise )
goto  ≡ λp λα λκ.(T[[p]] α κ)        (i.e., goto ≡ λp.T[[p]])

The following equations define the translation from LS to LAM:

T[[p]]           ≡ Tp[[<p clauses>]]
Tp[[p X]]        ≡ λα λκ.(unify [[X]] α κ abort)
Tp[[p X :- G]]   ≡ λα λκ.(unify [[X]] α (Tg[[G]] κ) abort)
Tg[[q X]]        ≡ λκ.(goto q [[X]] κ)
Tg[[G1, G2]]     ≡ λκ.(Tg[[G1]] (Tg[[G2]] κ))

The first equation just says that the translation of a predicate is the translation of its clauses (only one in this section). Intuitively, the code T[[p]] will be executed to solve a goal (p T). The second equation says that to solve a goal (p α) with a clause (p X), the unification of X and α must be called with the two continuations κ and abort. If the unification succeeds, κ will be executed. With a non-empty body clause (third equation), the next thing to do if the unification succeeds is to evaluate the body: the continuation given to unify is the translation of the body (Tg[[G]]) applied to the current continuation (κ).

The first equation of Tg[[·]] simply translates an atomic goal into a call to the appropriate predicate (done with the goto instruction). The last equation composes the codes associated to the two goals of a conjunction. For example, the clause p (f X Y) :- q X, p Y is translated (after β-reduction) into

λα λκ.(unify [[(f X Y)]] α (goto q [[X]] (goto p [[Y]] κ)) abort)
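As a reading aid only (our own sketch, written at the source level and not part of the abstract machine above), the same success-continuation idea can be expressed by giving each predicate an extra argument K, a goal to call when the predicate succeeds; the _c predicate names and the base clauses are hypothetical:

% The clause  p (f X Y) :- q X, p Y.  becomes:
p_c (f X Y) K :- q_c X (p_c Y K).
% Hypothetical base cases, so that the sketch is complete:
q_c a K :- K.
p_c a K :- K.
% The query  ?- p_c (f a a) true.  then succeeds, like  ?- p (f a a).

The conjunction (q X, p Y) has become a nesting of continuations, exactly as in the Tg equation for conjunctions.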

3.2 Failure Continuation

Now, we have to deal with real programs, i.e. predicates with several clauses (the separator is the dot). After a failure in unification, the remaining clauses of a predicate (the current one or another one) have to be tried. To do that, we need a second continuation, called the failure continuation, which is the code to execute after a failure. This continuation is noted φ. It is an argument of the functions resulting from the translation. It is also an argument of the success continuations (κ). Indeed, after a failure, backtracking is done on the last choice, which is the current failure continuation. For example, the predicates true and false will be translated as follows:

T[[true]]   ≡ λκ λφ.(κ φ)
T[[false]]  ≡ λκ λφ.φ

As said before, we suppose that the abstract machine handles terms and has a unification algorithm. The unification produces a substitution which is applied to the success continuation and not applied to the failure continuation. Here are the new equations for the translation:

Tp[[C1.C2]]      ≡ λα λκ λφ.(Tp[[C1]] α κ (Tp[[C2]] α κ φ))
Tp[[p X]]        ≡ λα λκ λφ.(unify [[X]] α κ φ)
Tp[[p X :- G]]   ≡ λα λκ λφ.(unify [[X]] α (Tg[[G]] κ) φ)

The first equation is the new one. It must be related to the Tg[[·]] equation for a conjunction of goals: a conjunction of goals leads to a composition of success continuations, a conjunction of clauses leads to a composition of failure continuations. The equations for Tg[[·]] are the same as before (by η-equivalence).
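Continuing the earlier source-level sketch (again ours, not the paper's machine), a predicate with several clauses can be hand-translated by also threading a failure goal F; the second clause is then reachable only through the failure continuation of the first:

% Source clauses:  r 1.  r 2.
r_c X K F :- unify_or X 1 K (unify_or X 2 K F).
% unify_or T1 T2 K F calls K if T1 and T2 unify, and F otherwise;
% the ordinary cut commits to the unifying case.
unify_or T T K _F :- !, K.
unify_or _T1 _T2 _K F :- F.

Solving (r_c X K F) first tries X = 1 and runs K; the alternative X = 2 is only reached through the failure continuation, mirroring the Tp[[C1.C2]] equation.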

3.3 Cut

We show here how to introduce the standard cut (!) in our translation. A cut removes the choice point on the clauses following the one it is in, and the choice points left by the evaluation of the goals to its left. With the continuation scheme, we can say that the cut replaces the failure continuation by the failure continuation of the goal that called the current clause. The latter, called ψ, needs to be passed to Tg[[·]]. The translation of the goal ! is straightforward: just replace the failure continuation by the cut failure continuation ψ. We get the equations of figure 1.

T[[p]]           ≡ λα λκ λφ.(Tp[[<p clauses>]] α κ φ φ)
Tp[[C1.C2]]      ≡ λα λκ λφ λψ.(Tp[[C1]] α κ (Tp[[C2]] α κ φ ψ) ψ)
Tp[[p X]]        ≡ λα λκ λφ λψ.(unify [[X]] α κ φ)
Tp[[p X :- G]]   ≡ λα λκ λφ λψ.(unify [[X]] α (Tg[[G]] κ ψ) φ)
Tg[[q X]]        ≡ λκ λψ λφ.(goto q [[X]] κ φ)
Tg[[!]]          ≡ λκ λψ λφ.(κ ψ)
Tg[[G1, G2]]     ≡ λκ λψ λφ.(Tg[[G1]] (Tg[[G2]] κ ψ) ψ φ)

unify ≡ λt1 λt2 λκ λφ.( κθ φ   if t1 unifies with t2, θ being the computed substitution
                        φ      otherwise )
goto  ≡ λp λα λκ λφ.(T[[p]] α κ φ)  =  λp.T[[p]]

Figure 1: Prolog Semantics

4 An Operational Semantics for λProlog

λProlog differs from Prolog by its terms and formulas. Our purpose is to translate formulas into control. As we said before, the translation [[·]] of LS terms to LAM terms may just be the identity. So, we now suppose that [[·]] is a function which translates λ-terms of LS to λ-terms of LAM. On the other hand, we modify the translation for the new goals of λProlog, universal quantification and implication. The unification algorithm of the abstract machine must now be able to unify λ-terms. We also suppose that the abstract machine is deterministic. But a unification problem on λ-terms may have many solutions. Because of this new feature, the continuations will be affected by unification. In this section, we present the main features of the translation of λProlog. The complete definition of the translator and the abstract machine for λProlog is given in figure 2.

4.1 Implication

Solving an implication follows the inference rule:

   Σ; H, P ⟶ G
   --------------- IMP_R
   Σ; P ⟶ H => G

This rule modifies the program. So the instructions of the abstract machine have a new argument: the current program. In fact, static clauses (the standard ones) cannot be removed from the program: they are always in the current program. Because of that, only the clauses added by the implication rule have to be represented. They are called dynamic clauses and denoted Γ. Γ is a set of pairs <p, Tp> where p is a predicate and Tp is the translation of the dynamic clauses associated to p. We need two new instructions in the abstract machine. One, called dyn, taking a predicate p, executes the code Tp associated to p in Γ. The instruction add executes code with an augmented Γ:

add ≡ λp λTh λTg λκ λψ λΓ λφ.(Tg κ ψ Γ' φ)

where Γ' stands for Γ where the clauses Tp associated to p are replaced by

λα λκ λφ λψ λΓ.(Th α κ (Tp α κ φ ψ Γ) ψ Γ)

which is the composition of the clause Th with the clauses Tp (as in Tp[[C1.C2]]). This can be read "execute the code Tg with a new clause Th for the predicate p". For the dyn and add instructions, if there is no clause already associated to p in Γ, then Tp is equal to λα λκ λφ λψ λΓ.φ (the translation of fail). The instruction dyn, which executes the code of the dynamic clauses, may be produced by the translation of a predicate or of a goal. We prefer to produce it with the translation of a predicate because it is easy, with a static analysis or a user declaration, to know the predicates for which some dynamic clauses may be added. For such predicates, T[[·]] is changed. We also add a Tg[[·]] equation for the translation of implication:

T[[p]]        ≡ λα λκ λφ λΓ.(dyn p α κ (Tp[[<p clauses>]] α κ φ φ Γ) Γ)
Tg[[H => G]]  ≡ λκ λψ λΓ λφ.(add p Tp[[H]] Tg[[G]] κ ψ Γ φ)

In the second equation, p is the predicate symbol of the clause H.

4.2 Universal Quantification

The resolution of a universal quantification is given by the inference rule:

   Σ + c; P ⟶ G[x ← c]
   ---------------------- ALL_R   (c ∉ Σ)
   Σ; P ⟶ pi x\ G

In this rule the signature is modified. So the signature is now represented and is an argument of the abstract machine instructions. The signature, called Σ, is the set of new constants added by universal quantifications. Like the add instruction for implications, we need an instruction, called all, to augment the signature. The signature is a list built with the constructor cons. The all instruction picks a new constant and executes the code, where the quantified variable is substituted by the new constant, with the augmented signature:

all ≡ λTg λκ λψ λΓ λΣ λφ.(Tg c κ ψ Γ (cons c Σ) φ)   with c ∉ Σ

The all instruction is produced by the translation of universally quantified goals:

Tg[[pi G]] ≡ λκ λψ λΓ λΣ λφ.(all (λC.Tg[[(G C)]]) κ ψ Γ Σ φ)

Here, [[·]] is really supposed to be the identity: the C in λC stands for a term of LAM, while in (G C), C stands for a term of LS. The control for handling universal quantification is simple. As a matter of fact, the problems with pi come from unification: occurrences of the new constants must be checked. To do that, each unknown has an explicit scope. An unknown can be bound only to terms with constants in its scope. It is not our purpose here to describe how unification handles that problem (see [2, 3]).

Note: Unlike the failure continuation, the dynamic clauses and the signature are in the success continuation: they are not arguments of the success continuation. That difference appears in Tp[[p X :- G]] and Tg[[G1, G2]] (equations 4 and 7 in figure 2): in the composition, Γ and Σ appear beside κ but φ does not.

4.3 Non-Deterministic Unification

The problem of unifying two λ-terms may have many solutions. For example, the unification of (F 1) and 1 has two solutions: F = x\ x and F = x\ 1. In fact, using Huet's algorithm [7], the solutions of a unification problem are the branches of a tree which may be infinite. For efficiency reasons, this tree is explored with a depth-first search (as for Prolog, completeness is lost). This choice allows the unification search to be plugged into the proof search by using the same failure continuation. Then, the non-deterministic problems of unification are lifted to the λProlog level. That means that the non-deterministic part (called MATCH by Huet) of the unification procedure is written in λProlog and that the unify instruction may add a match goal in the success continuation. Our purpose is not to give details about the match predicate, but to show that a strong modification of the unification is simply handled in the continuation scheme. The abstract machine is slightly modified and its control is not changed. The final description of the abstract machine is in figure 2.

5 New Built-Ins to Replace Continuations

In the previous section, we showed that the execution of a λProlog program may be seen as the manipulation of two continuations by instructions of a low-level machine. Now, we propose to introduce new built-ins in λProlog to pull the continuations up to the language level. The idea is to permit the user to get and replace the current continuations.

T[[p]]           ≡ λα λκ λφ λΓ λΣ.(dyn p α κ (Tp[[<p clauses>]] α κ φ φ Γ Σ) Γ Σ)      (1)
Tp[[C1.C2]]      ≡ λα λκ λφ λψ λΓ λΣ.(Tp[[C1]] α κ (Tp[[C2]] α κ φ ψ Γ Σ) ψ Γ Σ)       (2)
Tp[[p X]]        ≡ λα λκ λφ λψ λΓ λΣ.(unify [[X]] α κ φ Γ Σ)                           (3)
Tp[[p X :- G]]   ≡ λα λκ λφ λψ λΓ λΣ.(unify [[X]] α (Tg[[G]] κ ψ Γ Σ) φ Γ Σ)           (4)
Tg[[q X]]        ≡ λκ λψ λΓ λΣ λφ.(goto q [[X]] κ φ Γ Σ)                               (5)
Tg[[!]]          ≡ λκ λψ λΓ λΣ λφ.(κ ψ)                                                (6)
Tg[[G1, G2]]     ≡ λκ λψ λΓ λΣ λφ.(Tg[[G1]] (Tg[[G2]] κ ψ Γ Σ) ψ Γ Σ φ)                (7)
Tg[[pi G]]       ≡ λκ λψ λΓ λΣ λφ.(all (λc.Tg[[(G c)]]) κ ψ Γ Σ φ)                     (8)
Tg[[H => G]]     ≡ λκ λψ λΓ λΣ λφ.(add p Tp[[H]] Tg[[G]] κ ψ Γ Σ φ)                    (9)

In equation (9), either H = (p X :- G) or H = (p X).

unify ≡ λt1 λt2 λκ λφ λΓ λΣ.( κθ φ                             one solution (substitution θ)
                              (goto match <t1, t2> κ φ Γ Σ)    several solutions
                              φ                                no solution )
goto  ≡ λp λα λκ λφ λΓ λΣ.(T[[p]] α κ φ Γ Σ)                                          (10)
all   ≡ λTb λκ λψ λΓ λΣ λφ.(Tb c κ ψ Γ (cons c Σ) φ)   with c ∉ Σ                     (11)
dyn   ≡ λp λα λκ λφ λΓ λΣ.(Tp α κ φ φ Γ Σ)                                            (12)
add   ≡ λp λTh λTb λκ λψ λΓ λΣ λφ.(Tb κ ψ Γ' Σ φ)                                     (13)

In the last equation (13), Γ' stands for Γ where the clauses Tp associated to p (Tp is the code associated to p in Γ, as in equation (12)) are replaced by λα λκ λφ λψ λΓ λΣ.(Th α κ (Tp α κ φ ψ Γ Σ) ψ Γ Σ).

Figure 2: λProlog Semantics

After recalling the manipulations of continuations in a functional programming language, we give four new built-ins to do such manipulations in λProlog. These built-ins are illustrated with examples. Using the semantics of the previous section, we are able to prove equivalences of goals in which these built-ins appear. Note that these new built-ins are all available in the Prolog/Mali system [3], which is a compiler for λProlog.

5.1 Continuations in Functional Programming

The single continuation of a functional programming language is roughly the same as our success continuation. In some Scheme dialects, this continuation is made explicit at the language level. The user can catch the continuation using the predefined callcc and replace the current continuation using the predefined throw. The argument of callcc must be a function, which is applied to the current continuation. For example, the call (using a Lisp-like syntax) (callcc (lambda (X) (throw X 1))) just gets and immediately calls the current continuation with the value 1 (it is then equivalent to the expression 1).

5.2 Replacing the Success Continuation in λProlog

We give the definition of two new built-ins, two examples and a justification for using λProlog instead of standard Prolog.

5.2.1 Definitions

As our success continuation is similar to the continuation of functional languages, throw and callcc can be offered in λProlog. The requirement is that the continuation must be a term. If the continuation is only caught and set, the structure of the continuation does not need to be known. A success continuation has to be applied to a failure continuation. When the current success continuation is replaced by one which has been caught, we have the choice of applying it to the current failure continuation or to the failure continuation present when the success continuation was caught. We choose the second solution: choice points created between the catch and the replacement no longer exist after the replacement. So, the success continuation and the failure continuation will be caught together: the caught object is the application of the first to the second. We call this object the full continuation. We define the type s for the full continuation. The argument of callcc is a function from a full continuation to a goal. The argument of throw is a full continuation. Unlike in functional languages, throw does not have a second argument: predicates do not return values. The types and definitions, using our λProlog semantics, of callcc and throw are in figure 3. The argument of callcc will typically be an abstraction (K\ (Goal K)).

type callcc (s -> o) -> o.
type throw s -> o.

Tg[[(callcc G)]]  ≡ λκ λψ λΓ λΣ λφ.(Tg[[(G (κ φ))]] κ ψ Γ Σ φ)
Tg[[(throw K)]]   ≡ λκ λψ λΓ λΣ λφ.[[K]]

Figure 3: Replacing the success continuation

We can define callcc in another way in order to hide the throw from the user, with the following definition:

Tg[[(callcc G)]] ≡ λκ λψ λΓ λΣ λφ.(Tg[[(G (throw (κ φ)))]] κ ψ Γ Σ φ)

The object passed by callcc is then the goal throw, which does the change of continuation. The object caught with callcc is then not thrown but only called. The two definitions are equivalent. We choose the first in order to attach a clear syntactic look to the replacement of the continuation.

5.2.2 Example: Avoiding Useless Evaluation

Suppose you want to evaluate the product of the elements of a list. You can define the following predicate:

product [] 1.
product [X | L] P :- product L P1, P is X * P1.

Using callcc and throw, we are now able to write a λProlog program which evaluates the product while avoiding useless multiplications (just noticing that if an element is equal to 0 then the product is equal to 0).

product L P :- callcc K\ (pro L P K P).
pro [] 1 _K _P.
pro [X | L] _P K 0 :- 0 is X, throw K.
pro [X | L] P K GP :- pro L P1 K GP, P is X * P1.

The second argument of product and pro is the result of the product. The third argument of pro is the caught continuation. The fourth argument of pro is the result if the continuation is used (i.e. the evaluation is stopped). The only specific feature of λProlog used here is the higher-order notation for the argument of callcc. The program can really be improved using the implication of λProlog:

product L P :- callcc K\ ((zero K :- P = 0) => pro L P).
pro [] 1.
pro [X | L] _P :- 0 is X, zero K, throw K.
pro [X | L] P :- pro L P1, P is X * P1.

The implication adds a clause which stores the continuation K and, if used, will unify the final result P with 0. When a 0 is encountered in the list, this continuation is simply caught (zero K) and set (throw K).
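For instance, with the definitions above, one could ask the following query (ours; the exact answer display depends on the system):

?- product [2, 3, 0, 5, 7] P.
P = 0.

The 0 is detected as soon as it is reached, and the pending multiplications stored in the success continuation are simply discarded by the throw.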

5.2.3 Example: The Block Mechanism

This use of continuation manipulation can be generalised to an exception mechanism. We show here how to program the standard "blocks" of Prolog in λProlog.

block Label Goal :- callcc K\ (block_state Label K => Goal).
block_exit Label :- block_state Label K, throw K.

(block Label Goal) stores the current continuation (caught with callcc) with the Label, using the implication of the clause (block_state Label K). Then, Goal is called. If during the execution of Goal there is a call of (block_exit L), L and Label are unified and the caught continuation is set. In fact, (block_exit L) looks for the most recent invocation of block whose Label unifies with L. To get this behaviour, it is important that implications are handled in a stack-like style: last added, first tried. Then, our first example may be rewritten using the general mechanism:

product L P :- block (result P) (pro L P).
pro [] 1.
pro [X | L] _P :- 0 is X, block_exit (result 0).
pro [X | L] P :- pro L P1, P is X * P1.
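The same mechanism serves for other non-local exits. As a further, purely illustrative example of block and block_exit (ours, with hypothetical tree constructors), the following predicate returns the leftmost leaf of a binary tree and abandons the rest of the traversal as soon as it is found:

% Assuming:  type leaf int -> tree.   type node tree -> tree -> tree.
first_leaf T X :- block (found X) (scan T).
scan (leaf N) :- block_exit (found N).
scan (node L R) :- scan L, scan R.

When scan reaches a leaf, (block_exit (found N)) unifies the label, which binds X to N, and throws the continuation caught by block, so first_leaf returns immediately.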

5.2.4 Why λProlog?

We use two specific features of λProlog: the lambda notation to define callcc, and the implication to have a clean use of it. In the block mechanism, we cannot replace the implication by a classic assert, because labels are unified: information can then be returned through this label. In our example, the final result P is unified with 0 when the labels are unified. With an assert, we would have to restrict labels to be ground terms. The callcc cannot be easily defined (nor used) without the lambda notation. Indeed, a get_cc which would only get the current continuation (defined by Tg[[(get_cc K)]] ≡ λκ λψ λΓ λΣ λφ.(unify [[K]] (κ φ) κ φ Γ Σ)) does not work. For example, you may reasonably expect that the goal get_cc K, throw K is equivalent to true but, in fact, it causes a loop: the first goal of the continuation K is throw K! The other solution is to use a false quantification with a logic variable, as in bagof: callcc K^(throw K). It leads to an ugly syntactic restriction, i.e. K must not occur in the other goals of the same clause.

5.3 Replacing the Failure Continuation

As a matter of fact, we saw (cf. 3.3) that the user is already able to replace the failure continuation using the cut (!). But sometimes, the user needs more. A good example is the evaluation of a ! in a meta-call. You certainly cannot have a clause like:

call ! :- !.

which has no effect. We want that in a goal call((Goal, !)), if Goal succeeds, all choice points on Goal are removed, i.e. ! should replace the current failure continuation by the failure continuation of the goal call((Goal, !)).

5.3.1 Definitions

As for the success continuation, we propose a callfc to catch the failure continuation and a cut to replace the current failure continuation. A failure continuation has type f. The types and definitions, using our λProlog semantics, of callfc and cut are in figure 4.

type callfc (f -> o) -> o.
type cut f -> o.

Tg[[(callfc G)]]  ≡ λκ λψ λΓ λΣ λφ.(Tg[[(G φ)]] κ ψ Γ Σ φ)
Tg[[(cut Z)]]     ≡ λκ λψ λΓ λΣ λφ.(κ [[Z]])

Figure 4: Replacing the failure continuation

Note that, unlike callcc, a simple get_fc may be defined instead of callfc. Each one may be defined with the other one:

get_fc C :- callfc Z\ (Z = C).
callfc G :- get_fc C, G C.

The functional version is better because it uses a scoped variable instead of a global one (scoped in the whole clause).

5.3.2 Example: Cut and Local Cut

We can now write a correct call for cuts:

call G :- callfc Z\ ((! :- cut Z) => '$call' G).

The clause added by the implication simply locally "redefines" the standard cut before calling the goal ('$call' stands for the caller of the kernel). Note that this solution relies on the fact that dynamic clauses are always tried before static ones. Using callfc and cut, we can also easily compile the local cut. Indeed, we just have to transform a goal (G1 -> G2) into (callfc Z\ (G1, cut Z, G2)). Such transformations are heavily used in partial evaluators [14].
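As another small illustration (ours; the name once is hypothetical here), the same pair of built-ins gives a one-solution meta-call in one line:

once G :- callfc Z\ (G, cut Z).

The failure continuation caught before calling G is restored by cut as soon as G succeeds, so all the choice points created by G disappear, exactly as in the (G1 -> G2) transformation above.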

5.4 Formal Equivalences

We needed a semantics to define our new built-ins. Now, we can use the semantics to prove equivalences on goals using these built-ins. We give here some small examples.

(callcc K\ (throw K)) ≡ true:

Tg[[callcc K\ (throw K)]]
  = λκ λψ λΓ λΣ λφ.(Tg[[(throw (κ φ))]] κ ψ Γ Σ φ)
  = λκ λψ λΓ λΣ λφ.((λκ' λψ' λΓ' λΣ' λφ'.(κ φ)) κ ψ Γ Σ φ)
  = λκ λψ λΓ λΣ λφ.(κ φ)
  = Tg[[true]]

A similar proof yields (callfc K\ (cut K)) ≡ true.

(callcc K\ (Goal, throw K)) ≡ (callfc K\ (Goal, cut K)), provided K does not appear in Goal:

Tg[[callcc K\ (Goal, throw K)]]
  = λκ λψ λΓ λΣ λφ.(Tg[[Goal, throw (κ φ)]] κ ψ Γ Σ φ)
  = λκ λψ λΓ λΣ λφ.(Tg[[Goal]] (Tg[[throw (κ φ)]] κ ψ Γ Σ) ψ Γ Σ φ)
  = λκ λψ λΓ λΣ λφ.(Tg[[Goal]] ((λκ' λψ' λΓ' λΣ' λφ'.(κ φ)) κ ψ Γ Σ) ψ Γ Σ φ)
  = λκ λψ λΓ λΣ λφ.(Tg[[Goal]] (λφ'.(κ φ)) ψ Γ Σ φ)

Tg[[callfc K\ (Goal, cut K)]]
  = λκ λψ λΓ λΣ λφ.(Tg[[Goal, (cut φ)]] κ ψ Γ Σ φ)
  = λκ λψ λΓ λΣ λφ.(Tg[[Goal]] (Tg[[(cut φ)]] κ ψ Γ Σ) ψ Γ Σ φ)
  = λκ λψ λΓ λΣ λφ.(Tg[[Goal]] ((λκ' λψ' λΓ' λΣ' λφ'.(κ' φ)) κ ψ Γ Σ) ψ Γ Σ φ)
  = λκ λψ λΓ λΣ λφ.(Tg[[Goal]] (λφ'.(κ φ)) ψ Γ Σ φ)

The two translations are identical, so the two goals are equivalent.

6 Concluding Remarks

Discussion There is a strong difference between the block mechanism implemented with our callcc and the catch&throw proposed by the ISO standardisation committee. In the ISO scheme, the exit is done by failure and then a copy of the exited label is necessary. So the ISO mechanism cannot be used in a program like our product program (cf. 5.2.3): the user does not want to pay for the copy of the returned result and he may want to return a non-ground result. [6] uses continuations to perform "lateral" control transfers, e.g. to implement new search-strategies. In this paper we have not described the handling of bindings, but it is well known that, for efficiency reasons, the failure continuation must be implemented with a stack. With such an implementation, it is not possible to handle failure continuations which are not part of the failure continuation of the current success continuation. [13] argues that "he does not know problems solved with the aid of ancestral cuts (our callfc&cut) which could not be solved with the aid of the standard cut". But he also says that most Prolog kernels "have this method internally"! We claim that if something is useful for the compiler designer, it is useful for any user, provided that the semantics is well defined (this is the case for callfc&cut with the continuation-based semantics). Like the proposed semantics for Prolog, our semantics does not take into account assert and retract. But λProlog has no such predicates: λProlog forces another style of programming, in which implication is the only means of modifying the program. We showed that the operational semantics of the implication is simple and clear. Our semantics gives a translation from λProlog programs into abstract machine instructions. This semantics has been tested using a real translator and an interpreter for the abstract machine, both written in λProlog. The encoding of the translator in λProlog is straightforward using the λ-terms [2]. Currently, we are thinking about replacement in the context. As for the success and failure continuations, catches and modifications of the dynamic clauses and of the signature are possible. In fact, such explicit modifications might be useful for any meta-programming application (for example, modification of the signature is needed in our implementation of the abstract machine in λProlog). Unlike the built-ins presented here, such modifications cannot be simply defined using our continuation scheme. New instructions of the abstract machine would be necessary.

Conclusion Continuations are widely recognised in functional programming as powerful tools to describe semantics and design compilers. Given to the user, they allow new programming schemes. Their favourite use is the implementation of exception mechanisms. We have presented new built-ins to handle the two continuations of a logic programming language: callcc and throw, similar to the functional programming ones, catch and set the success continuation; callfc and cut respectively catch the current choice-point and remove all the more recent choice-points. We chose λProlog for its higher-order syntax and its new kinds of goals. Defining these new built-ins led us to propose a semantics for λProlog including the denotation of the new formulas. We have given some examples using the new built-ins and proofs about what the built-ins actually do. We are not aware of any other proposal for explicit manipulation of the success continuation in a logic programming language (though these manipulations necessarily exist in the kernel of Prolog systems). The built-ins proposed in the present paper are included and used in the PM system [3]. The error handler of the compiler, written in λProlog, relies on the block mechanism we have described in this paper.

Acknowledgement The authors wish to thank M. Ducassé, J. Schimpf and M. Wallace for their helpful comments and suggestions.

References

[1] A.W. Appel. Compiling with Continuations. Cambridge University Press, 1992.
[2] P. Brisset. Compilation de λProlog. Thèse, Université de Rennes I, 1992.
[3] P. Brisset and O. Ridoux. The Compilation of λProlog and its execution with MALI. Technical Report 687, IRISA, 1992.
[4] C. Consel and S.C. Khoo. Semantics-directed generation of a Prolog compiler. In J. Maluszynski and M. Wirsing, editors, 3rd Int. Work. Programming Languages Implementation and Logic Programming, Springer-Verlag, 1991. LNCS 528.
[5] M.J.C. Gordon. The Denotational Description of Programming Languages. Springer-Verlag, 1979.
[6] C.T. Haynes. Logic continuations. J. Logic Programming, 4(2):157-176, June 1987.
[7] G. Huet. A unification algorithm for typed λ-calculus. Theoretical Computer Science, (1):27-57, 1975.
[8] ISO. PROLOG part 1, general core. 1992. Committee Draft Proposal.
[9] C. Mellish and S. Hardy. Integrating Prolog in the POPLOG environment. In J.A. Campbell, editor, Implementations of Prolog, pages 147-162, Ellis Horwood, 1984.
[10] D.A. Miller. Logics for Logic Programming. Tutorial, 8th Int. Conf. Logic Programming, Paris, France, 1991.
[11] G. Nadathur and D.A. Miller. An overview of λProlog. In K. Bowen and R. Kowalski, editors, Symp. Logic Programming, pages 810-827, Seattle, Washington, USA, 1988.
[12] T. Nicholson and N. Foo. A denotational semantics for Prolog. ACM Transactions on Programming Languages and Systems, 11(4):650-665, 1989.
[13] R.A. O'Keefe. The Craft of Prolog. MIT Press, 1990.
[14] S. Prestwich. The PADDY Partial Deduction System. Technical Report 6, ECRC, 1992.
[15] T. Sato and H. Tamaki. Existential continuation. New Generation Computing, (6):421-438, 1989.
[16] P. Tarau and M. Boyer. Elementary logic programs. In P. Deransart and J. Maluszynski, editors, 2nd Int. Work. Programming Languages Implementation and Logic Programming, Springer-Verlag, 1990. LNCS 456.
[17] K. Ueda. Making exhaustive search programs deterministic: part II. In J.L. Lassez, editor, 4th Int. Conf. Logic Programming, MIT Press, Melbourne, Australia, 1987.