Implementing the Linear Logic Programming Language Lygon

Michael Winikoff¹  James Harland²
[email protected]  [email protected]
http://www.cs.mu.oz.au/~winikoff  http://www.cs.rmit.edu.au/~jah
Abstract There has been considerable work aimed at enhancing the expressiveness of logic programming languages. To this end logics other than classical first order logic have been considered, including intuitionistic, relevant, temporal, modal and linear logic. Girard's linear logic has formed the basis of a number of logic programming languages. These languages are successful in enhancing the expressiveness of (pure) Prolog and have been shown to provide natural solutions to problems involving concurrency, natural language processing, database processing and various resource oriented problems. One of the richer linear logic programming languages is Lygon. In this paper we investigate the implementation of Lygon. Two significant problems that arise are the division of resources between sub-branches of the proof and the selection of the formula to be decomposed. We present solutions to both of these problems. Keywords: Linear Logic, Logic Programming, Lygon, Implementation. This paper was published in John Lloyd, editor, International Logic Programming Symposium, pages 66-80, Portland, Oregon, December 1995. MIT Press.
1 Introduction

Logic programming may be viewed as the interpretation of logical formulae as a programming language. This is done by giving an operational interpretation to the process of proof search. Traditional logic programming languages such as Prolog are based on the Horn clause subset of classical logic. Pure Prolog, however, is lacking in expressiveness in a number of areas. A desire for richer and more expressive logic programming languages has led researchers in two directions. The first direction is the use of richer subsets than Horn clauses. The second direction involves the use of logics other than classical logic. Some logics that have been considered as the basis for logic programming languages include modal and temporal [14], relevant [4] and linear logics. This paper is concerned with the language Lygon [7, 15, 22]. Lygon is a logic programming language which is based on linear logic. By design, the class of formulae used is as large as possible, so as to make the language as expressive as possible. Due to its basis in linear logic, Lygon's applications include resource management, state handling and concurrency. Lygon includes as a subset a number of languages, including Horn clause classical logic languages (such as pure Prolog), hereditary Harrop formulae languages (such as first order λProlog) and some other proposals

¹ Department of Computer Science, The University of Melbourne, Parkville, Melbourne, 3052, Australia. ² Department of Computer Sciences, Royal Melbourne Institute of Technology, GPO Box 2476V, Melbourne, 3001, Australia.
for languages based on linear logic (ACL [11], Lolli [10], LinLog [2], LC [18] and Forum [13]). Implementing Lygon is nontrivial. In addition to the usual issues for logic programming languages there are a number of new ones; two particular ones stand out.

1. In searching for a proof of a goal involving the ⊗ connective we need to split the context between the two subproofs. This is done in the reduction of the ⊗ rule. Since logic programming languages search for proofs in a bottom-up fashion, this splitting is nondeterministic, and inefficient if done naïvely.

2. Consider searching for a proof of a multiple conclusioned goal. Each time an inference rule is to be applied we must first select the formula that is to be reduced: the active formula. If done naïvely, this selection process implies that any proof involving multiple conclusions has significantly more nondeterminism than necessary.

This paper discusses these two issues and presents solutions to them. The solutions presented have been incorporated in the current implementation of Lygon. The paper is organised as follows. In Section 2 we briefly discuss relevant background. Section 3 discusses at some length the deterministic allocation of resources among sub-branches and presents our solution. In Section 4 we discuss the selection of the active formula and show how the results of Andreoli [2] and Galmiche and Perrier [5] can be used to derive a solution. Section 5 presents results, both theoretical and practical. We compare our work to other relevant work - primarily Lolli-related - in Section 6, and conclude and discuss further work in Section 7.
2 Background Due to space limitations we are unable to include an introduction to linear logic. The reader requiring an introduction will be well served by one of [1, 6, 12, 16].
2.1 The Sequent Calculus
The sequent calculus is a formalisation of the mathematical notion of proof. The inference rule

    S₁  ...  Sₙ
    ------------ A
         S

states that from S₁ ... Sₙ we can infer S. The name of the inference rule is A. A proof is a tree of inference rules where the sequent to be proved is at the root of the tree and the leaves of the tree are instances of an axiom of some sort. These trees are depicted with the root below the leaves³. The structure of a sequent varies between logics and between different presentations of logics. Generally it is a sequence (hence the name) of formulae, written ⊢ F₁, F₂, ..., Fₙ. The formalisation of a logic typically contains a rule for each connective. In addition there are usually structural rules which can be applied to any formula. One inference rule that is part of nearly every logic is the interchange rule, which swaps the order of two formulae in the sequent. Often this rule is elided and we think of the sequents as multisets.

³ Unlike standard computer science practice.
Inference rules formalising linear logic are as follows, where Γ and Δ are multisets of formulae which are not of the form ?F, a and b are arbitrary formulae, p and q are atoms, and Ψ represents a multiset of formulae of the form ?F.

    --------- 1            ------------ Ax           ---------- ⊤
    ⊢ Ψ, 1                 ⊢ Ψ, p, p⊥                ⊢ Γ, ⊤, Δ

    ⊢ Γ, Δ                 ⊢ Γ, a, Δ   ⊢ Γ, b, Δ     ⊢ Ψ, a, Δ₁   ⊢ Ψ, b, Δ₂
    ---------- ⊥           --------------------- &   ------------------------ ⊗
    ⊢ Γ, ⊥, Δ              ⊢ Γ, a & b, Δ             ⊢ Ψ, a ⊗ b, Δ₁, Δ₂

    ⊢ Γ, a, b, Δ           ⊢ Γ, aᵢ, Δ                ⊢ Γ, a, ?a, Δ
    ------------- ⅋        --------------- ⊕         -------------- ?D
    ⊢ Γ, a ⅋ b, Δ          ⊢ Γ, a₁ ⊕ a₂, Δ           ⊢ Γ, ?a, Δ
                           (i ∈ {1, 2})

    ⊢ Ψ, a                 ⊢ Γ, a[y/x], Δ            ⊢ Γ, a[t/x], Δ
    -------- !             ---------------- ∀ (*)    ---------------- ∃
    ⊢ Ψ, !a                ⊢ Γ, ∀x.a, Δ              ⊢ Γ, ∃x.a, Δ

    ⊢ Γ, aθ, Δ
    ----------- Res
    ⊢ Γ, p, Δ

(*) where y is not free in Γ, Δ. (Res) where the program contains the clause q ← a and qθ = p for some substitution θ.

Our presentation is nonstandard in that we encode contraction (copying) and weakening (deletion) into other rules. Note that a formula of the form ?F is one that can be freely copied and deleted. The deletion is done by the 1 and Ax rules; the copying is done by the ⊗ and ?D rules. Logic programming is seen as a form of proof search. The search is done bottom-up: the process begins with a goal sequent and extends the tree upwards until all branches have been closed with axioms of some sort.
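To see the cost of the splitting problem concretely: a context of n formulae can be divided between the two premises of a ⊗ inference in 2^n ways, and a naïve bottom-up search may, in the worst case, try them all. A small illustrative sketch (ours, not part of the Lygon implementation):

```python
# Enumerate every way of splitting a context between the two premises of a
# tensor rule: the naive bottom-up search must, in the worst case, try all
# 2^n of them before failing.
from itertools import combinations

def splits(context):
    """Yield (left, right) pairs dividing `context` between the premises.
    Assumes the formulae in `context` are pairwise distinct."""
    items = list(context)
    for r in range(len(items) + 1):
        for left in combinations(items, r):
            right = [x for x in items if x not in left]
            yield list(left), right

print(sum(1 for _ in splits(['p', 'q', 'r'])))   # 8 = 2^3 candidate splits
```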
2.2 Lygon
This paper considers a subset of the full Lygon language as derived in [15]. Program clauses (D) must be of the form ∀x₁ ... xₙ (A ← G). Goals (G) are summarised by the following syntax (where A is an atom):

    G ::= G ⊗ G | G ⊕ G | G & G | G ⅋ G | !G | ?G | ∃x G | ∀x G | 1 | ⊥ | ⊤ | A | A⊥

The semantics of the language is dictated by the sequent calculus inference rules, with the exception that the ∃ rule is implemented using unification and logical variables. As an example, the toggle program in Lygon looks like⁴

    toggle ← (on ⊗ off⊥) ⊕ (off ⊗ on⊥)

⁴ Actually "real" Lygon is rendered in ASCII and would be toggle <- (on * neg off) + (off * neg on).
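The resource reading of the toggle clause can be animated in ordinary code (a hypothetical simulation of the reconstructed clause above, not Lygon itself): proving toggle consumes one state token and produces the other.

```python
# Toy simulation of the toggle clause's resource behaviour: one state token
# (on or off) is consumed and the opposite token is produced.
from collections import Counter

def toggle(resources):
    state = Counter(resources)
    if state['on']:                  # first disjunct: consume on, produce off
        state['on'] -= 1
        state['off'] += 1
    elif state['off']:               # second disjunct: consume off, produce on
        state['off'] -= 1
        state['on'] += 1
    else:
        raise ValueError('no state token available to consume')
    return +state                    # drop zero counts

print(toggle(['on']))    # Counter({'off': 1})
```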
Lazy splitting for this fragment is relatively simple; the real subtlety arises when ⊤ is reintroduced. Consider the formula (b ⅋ 1) ⊗ b⊥. Clearly it is not provable. Consider now a naïve formulation of lazy splitting. Instead of a sequent of the form ⊢ Γ we use the notation Γ ⇒ Δ, with the reading that Δ is the excess formulae (also known as the residue) being returned unused. A successful proof cannot have excess resources, and hence we require that its root be of the form Γ ⇒ ∅. The standard sequent rules are modified as follows. The axiom rules are modified to return unused formulae:

    --------------- Ax           ---------- 1
    p, p⊥, Γ ⇒ Γ                 1, Γ ⇒ Γ

Note that lazy splitting does not affect formulae of the form ?F, since these are not split by our version of the ⊗ rule. We elide these formulae from the rules below to clarify the presentation. The unary logical rules are modified to pass on returned formulae:

    a, b, Γ ⇒ Δ
    -------------- ⅋
    a ⅋ b, Γ ⇒ Δ

Finally, the ⊗ rule passes the excess from the left sub-branch into the right sub-branch:

    a, Γ ⇒ Δ     b, Δ ⇒ Δ'
    ------------------------ ⊗
    a ⊗ b, Γ ⇒ Δ'

Using these rules we find, however, that (b ⅋ 1) ⊗ b⊥ is derivable! In the following derivation (and those below), each sequent follows by the named rule from the sequents indented beneath it:

    (b ⅋ 1) ⊗ b⊥ ⇒        by ⊗, from
      b ⅋ 1 ⇒ b           by ⅋, from
        b, 1 ⇒ b          by 1, returning the unused b
      b⊥, b ⇒             by Ax
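The naïve rules are easily animated. The following sketch (our own toy prover, not the Lygon implementation) covers just Ax, 1, ⅋ and ⊗ over a tuple-based formula representation, and confirms that the unprovable formula is wrongly accepted:

```python
# Toy prover for the *naive* lazy-splitting rules over {Ax, 1, par, tensor}.
# The axiom rules return any unused formulae, ignoring scope -- which is
# exactly the unsoundness discussed in the text.

def prove(ctx):
    """ctx: tuple of formulae. Returns the set of derivable residues,
    each residue being a sorted tuple of formulae."""
    results = set()
    for i, f in enumerate(ctx):
        rest = ctx[:i] + ctx[i + 1:]
        kind = f[0]
        if kind == 'one':                       # 1, G => G
            results.add(tuple(sorted(rest)))
        elif kind == 'atom':                    # p, p-perp, G => G
            name, pol = f[1], f[2]
            for j, g in enumerate(rest):
                if g == ('atom', name, not pol):
                    results.add(tuple(sorted(rest[:j] + rest[j + 1:])))
        elif kind == 'par':                     # a, b, G => D
            results |= prove((f[1], f[2]) + rest)
        elif kind == 'tensor':                  # thread residue left-to-right
            for delta in prove((f[1],) + rest):
                results |= prove((f[2],) + delta)
    return results

def atom(n): return ('atom', n, True)
def neg(n):  return ('atom', n, False)

# (b par 1) tensor b-perp is NOT provable in linear logic, yet the naive
# rules "prove" it with an empty residue:
goal = (('tensor', ('par', atom('b'), ('one',)), neg('b')),)
print(() in prove(goal))   # True -- demonstrating the unsoundness
```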
This problem arises since the axiom rules return any and all formulae. For lazy splitting to be valid we must respect a notion of scope: a formula may only be returned if it was present in the conclusion of the ⊗ rule. In the unsound derivation above, b should not be returned by the 1 rule, since it was introduced above the ⊗ rule. Another characterisation is that the ⊗ rule cannot split what it does not have. To prevent this problem we must keep track of which formulae are returnable. We do this by tagging a returnable formula with a superscript ⊤. Untagged formulae must be consumed in the current branch of the proof; tagged formulae may be returned from the current branch. The marking may be thought of as a "proof of purchase" without which we cannot return the resource. Note that we have to allow nestable tags, to handle nested ⊗; the ⊗ rule adds and removes a single tag. The revised axiom rules can only pass on unused formulae if they are tagged:

    ------------- 1              ------------------ Ax
    1, Γ^⊤ ⇒ Γ^⊤                 p, p⊥, Γ^⊤ ⇒ Γ^⊤

The revised ⊗ rule

    a, Γ^⊤ ⇒ Δ^⊤      b, Δ ⇒ Δ'
    ------------------------------ ⊗
    a ⊗ b, Γ ⇒ Δ'

marks all existing formulae as returnable and then passes them to the first sub-branch. Note that the formulae returned from the first sub-branch must have their tags removed before being passed to the second sub-branch. This ensures that formulae which are untagged in the conclusion of the ⊗ rule cannot be returned unused from the right premise, and hence from the conclusion. As an example consider the formula (b ⅋ (1 ⊗ 1)) ⊗ b⊥, which is clearly unprovable. If we neglect to have our lazy ⊗ rule strip away the tags before passing formulae to the right premise, we lose soundness, since the above formula has a derivation:

    (b ⅋ (1 ⊗ 1)) ⊗ b⊥ ⇒         by ⊗, from
      b ⅋ (1 ⊗ 1) ⇒ b^⊤          by ⅋, from
        b, 1 ⊗ 1 ⇒ b^⊤           by ⊗ with the tags not stripped, from
          1, b^⊤ ⇒ b^⊤           by 1
          1, b^⊤ ⇒ b^⊤           by 1 (this premise should have been 1, b ⇒ , which fails)
      b⊥, b ⇒                    by Ax

We now introduce an additional rule. The Use rule claims a formula for use in the current sub-branch by stripping off the tags that allow it to be returned unused. Writing a° for a with all tags removed:

    a°, Γ ⇒ Δ
    ----------- Use
    a, Γ ⇒ Δ

We have seen the rules for ⊗, ⅋ and Ax. The rules for ⊕, ⊥, ?, ∀ and ∃ are similar to ⅋. The remaining rules are ! and &. When we apply ! we must ensure that there are no other (linear) formulae; thus we force all excess formulae to be returned. In this respect ! is similar to the Ax and 1 rules:

    a ⇒
    --------------- !
    !a, Γ^⊤ ⇒ Γ^⊤

The other rule which is affected by the lazy splitting mechanism is the & rule:

    a, Γ ⇒ Δ₁     b, Γ ⇒ Δ₂
    -------------------------- &
    a & b, Γ ⇒ Δ

This rule has the constraint that Δ₁ = Δ₂ = Δ. This enforces the requirement in the non-lazy & rule that the two sub-branches have the same context. Note that we prefer to have an explicit constraint, since this constraint will need to be subsequently modified when ⊤ is reintroduced. Looking at a complete proof and seeing sequents of the form a^⊤, b^⊤, a⊥ ⇒ b^⊤, one is often left with the feeling that there is still some magic at work. This is not so: when a proof is being constructed bottom-up, the right-hand side of the arrow (⇒) is left unbound on the way up and is determined at the leaves. The rules presented so far form a sound and complete collection of inference rules for the fragment of the logic excluding ⊤. These rules manage resources deterministically. The lazy splitting version of ⊤ involves a significant amount of subtlety and has implications for the & rule, which becomes rather complex. Consider the non-lazy inference

    ⊢ ⊤, Γ₁ (⊤)        ⊢ G, Γ₂
    ---------------------------- ⊗
    ⊢ ⊤ ⊗ G, Γ₁, Γ₂
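The tagged rules can be animated in the same style as the earlier naïve sketch (again a toy prover of ours covering only Ax, 1, ⅋ and ⊗, with integer tag counts playing the role of the nested superscripts, and tag-dropping on the selected formula playing the role of the Use rule):

```python
# Toy prover for lazy splitting *with* tags: context entries are
# (formula, tag_count) pairs; the tensor rule tags its context, the axiom
# rules may only return tagged formulae, and selecting a tagged formula for
# reduction implicitly applies the Use rule (its tags are dropped).

def prove(ctx):
    """ctx: tuple of (formula, tags). Returns the set of derivable residues."""
    results = set()
    for i, (f, _tags) in enumerate(ctx):        # dropping _tags = Use rule
        rest = ctx[:i] + ctx[i + 1:]
        kind = f[0]
        if kind == 'one':                       # 1 may only return tagged formulae
            if all(t > 0 for _, t in rest):
                results.add(tuple(sorted(rest)))
        elif kind == 'atom':
            name, pol = f[1], f[2]
            for j, (g, _gt) in enumerate(rest):
                if g == ('atom', name, not pol):
                    residue = rest[:j] + rest[j + 1:]
                    if all(t > 0 for _, t in residue):
                        results.add(tuple(sorted(residue)))
        elif kind == 'par':
            results |= prove(((f[1], 0), (f[2], 0)) + rest)
        elif kind == 'tensor':
            marked = tuple((g, t + 1) for g, t in rest)         # add a tag
            for delta in prove(((f[1], 0),) + marked):
                untagged = tuple((g, t - 1) for g, t in delta)  # strip a tag
                results |= prove(((f[2], 0),) + untagged)
    return results

def atom(n): return ('atom', n, True)
def neg(n):  return ('atom', n, False)

bad  = ('tensor', ('par', atom('b'), ('one',)), neg('b'))
good = ('par', neg('b'), atom('b'))
print(() in prove(((bad, 0),)))    # False: the unsound derivation is blocked
print(() in prove(((good, 0),)))   # True: b-perp par b is still provable
```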
The rule for ⊤ simply consumes all formulae. Consider now the lazy splitting version of the above inference:

    ⊤, Γ₁, Γ₂ ⇒ Γ₂ (⊤)        G, Γ₂ ⇒
    ------------------------------------ ⊗
    ⊤ ⊗ G, Γ₁, Γ₂ ⇒

The application of the ⊤ rule must somehow know which formulae are not to be consumed, since they will be required later in the proof. The lazy splitting version of the ⊤ rule has to divide formulae between ⊤ and the rest of the proof, and this can be done in a number of ways which is exponential in the number of formulae.
Although this sounds similar to the problem in the original ⊗ rule, there are two differences. Firstly, it is not possible to nest ⊤s, so a single non-nestable tag will suffice. Secondly, and more significantly, the direction is opposite: the first sub-branch of the ⊗ rule returns formulae which must be consumed by the second sub-branch, whereas the ⊤ rule will consume all formulae, by tagging them appropriately and then passing them on. The formulae passed on have been consumed by ⊤ but - in case they should not have been - can be "unconsumed". We shall use a prefix ⊤ ("maybe") to tag formulae which have been consumed by ⊤ and which can be unconsumed if necessary. Note that only returnable formulae are thus treated; formulae which are not returnable are summarily consumed by the ⊤ rule:

    --------------------- ⊤
    ⊤, Γ^⊤, Δ ⇒ (⊤Γ)^⊤

(where Δ contains the non-returnable formulae). One might be tempted to define ⊤ in terms of existing connectives, viz. ⊤F ≡ F ⊕ ⊥. There is, however, a subtle but important difference between the two. ⊤F represents a formula which may have been consumed by ⊤: thus F is either consumed - in which case it must not be used anywhere - or unconsumed - in which case it must be consistently accounted for. F ⊕ ⊥, on the other hand, allows for some parts of the proof to choose F and other parts to choose ⊥. Consider for example (⊤ ⊗ (b & 1)) ⅋ b⊥, which is not provable in the standard system; however, if we use b⊥ ⊕ ⊥ in place of ⊤b⊥ it has a derivation:

    (⊤ ⊗ (b & 1)) ⅋ b⊥ ⇒              by ⅋, from
      ⊤ ⊗ (b & 1), b⊥ ⇒               by ⊗, from
        ⊤, (b⊥)^⊤ ⇒ (b⊥ ⊕ ⊥)^⊤        by ⊤, returning b⊥ ⊕ ⊥ in place of ⊤b⊥
        b & 1, b⊥ ⊕ ⊥ ⇒               by &, from
          b, b⊥ ⊕ ⊥ ⇒                 by ⊕ (choosing b⊥), then Ax
          1, b⊥ ⊕ ⊥ ⇒                 by ⊕ (choosing ⊥), then ⊥, then 1

In addition to pointing out the difference between ⊤F and F ⊕ ⊥, this suggests
that the ⊤ rule cannot be simply added to the system derived so far, since maintaining the consistent usage of formulae of the form ⊤F requires a measure of global information. Thus, rather than having the axiom rules delete consumed formulae which have turned out to be unneeded (Ax′):

    --------------------------- Ax′
    p, p⊥, ⊤Δ, Γ^⊤ ⇒ Γ^⊤

we must return the consumed formulae which were unneeded, so that we can check that different branches agree on which consumed formulae have had to be unconsumed (Ax):

    ------------------------------- Ax
    p, p⊥, ⊤Δ, Γ^⊤ ⇒ Γ^⊤, ⊤Δ

If we do not return the unused formulae then we lose soundness, and we can derive unprovable formulae such as a, b, ⊤ ⊗ (a⊥ & b⊥):

    a, b, ⊤ ⊗ (a⊥ & b⊥) ⇒                  by ⊗, from
      ⊤, a^⊤, b^⊤ ⇒ (⊤a)^⊤, (⊤b)^⊤         by ⊤
      a⊥ & b⊥, ⊤a, ⊤b ⇒                    by &, from
        a⊥, ⊤a, ⊤b ⇒                       by Use (unconsuming a), from
          a⊥, a, ⊤b ⇒                      by Ax′, deleting ⊤b
        b⊥, ⊤a, ⊤b ⇒                       by Use (unconsuming b), from
          b⊥, ⊤a, b ⇒                      by Ax′, deleting ⊤a

Here the two premises of the & do not agree - the left unconsumes a while the right unconsumes b - yet Ax′ silently deletes the evidence and both premises report an empty residue.
We thus use Ax rather than Ax′. In order to do this we must modify the unary rules to pass on the returned formulae of the form ⊤F. We also need to have a look at the two binary rules, ⊗ and &. The ⊗ rule does the usual passing from the left sub-branch to the right. The only interesting point is that it is now possible for the left sub-branch to return formulae which do not have a ⊤ superscript tag. These formulae (which are of the form ⊤F) must be returned by the conclusion without being passed to the right sub-branch. If this is not done, soundness is compromised. Consider as an example the unprovable formula ((⊤ ⊗ 1) ⅋ a) ⊗ a⊥. If we use a lazy ⊗ rule which passes formulae of the form ⊤F into the right premise, then there is a derivation of this formula:

    ((⊤ ⊗ 1) ⅋ a) ⊗ a⊥ ⇒        by ⊗, from
      (⊤ ⊗ 1) ⅋ a ⇒ ⊤a          by ⅋, from
        ⊤ ⊗ 1, a ⇒ ⊤a           by ⊗, from
          ⊤, a^⊤ ⇒ (⊤a)^⊤       by ⊤
          1, ⊤a ⇒ ⊤a            by 1 (the ⊤a has wrongly been passed in)
      a⊥, ⊤a ⇒                  by Use, then Ax

We now turn to the & rule. Consider formulae of the form ⊤F. Unlike formulae
of the form F^⊤, these cannot be returned from the current branch for use elsewhere. Formulae of the form ⊤F can only be propagated downwards. The only rule which makes use of them (other than ⊤) is the & rule, which uses these formulae to enforce consistent consumption and undeletion between its two premises. We now investigate the details of this mechanism. We begin by noting that the residues in the two premises of a & rule need not necessarily match. In order to be able to detect when a mismatch is invalid we define the notion of a ⊤-like (sub)proof. The rules of the system are modified to track whether (sub)proofs are ⊤-like; this information is used by the & rule when enforcing consistent consumption of formulae between its premises. Despite the tagged input being the same in both premises, it is possible for the residues to be different:

    a & ⊤, (a⊥)^⊤ ⇒ ??          by &, from
      a, (a⊥)^⊤ ⇒               by Use, then Ax
      ⊤, (a⊥)^⊤ ⇒ (⊤a⊥)^⊤       by ⊤

This sub-proof occurs in the proof of ((a & ⊤) ⊗ 1) ⅋ a⊥ (which, for a change, is actually provable) and hence had better succeed. The intuition is that the ⊤a⊥ represents a formula which has been consumed but which perhaps should not have been. That the left premise of the & rule does not contain ⊤a⊥ indicates that the a⊥ must have been consumed in the left premise. That the right premise of the & does contain ⊤a⊥ indicates that the right sub-proof is prepared to undelete the a⊥ which it has consumed. In order to be consistent with the left sub-proof this must not be allowed to happen. Thus the proof is valid if we prevent the a⊥ from being unconsumed elsewhere in the proof. We define a (sub)proof to be ⊤-like if adding formulae to its sequents yields a valid proof. Although we would normally require that the residues of the premises of a & rule agree, if a premise is ⊤-like we can allow it to have extra residue. This extra residue represents consumed formulae; by not passing them on to the conclusion of the & rule we prevent them from being unconsumed, thus preserving soundness. Note that a & inference may have to be failed if its premises have incompatible residues and are not ⊤-like.

In order to be able to apply this check we need to track whether sub-proofs are ⊤-like. We add a tag ε to each sequent to indicate whether it is ⊤-like: ε = true if the proof is ⊤-like and ε = false if it is not. The ⊤ rule is given ε = true; the Ax and 1 rules are ε = false. The unary rules pass on the tag. A ⊗ rule's conclusion is ⊤-like if at least one of its premises is: if the tags on the two premises are ε = x and ε = y, the tag on the conclusion is ε = x ∨ y. The conclusion of a & inference is ⊤-like if both premises are: if the tags on the two premises are ε = x and ε = y, the tag on the conclusion is ε = x ∧ y. The previous proof is valid since the right premise would be labelled with ε = true. The returned formulae of the & rule's conclusion are calculated by taking the intersection of the returned formulae of the two premises. The proof fails if either premise has excess residue (formulae which are not in the intersection) and is tagged with ε = false. In the previous proof "??" would be empty (the intersection of an empty residue and (⊤a⊥)^⊤). One minor wrinkle is that while the intersection must agree on the formulae, it may not agree on their tags. Specifically, the formulae in one premise may have a ⊤ ("maybe") prefix in addition to their superscript tags, whereas those in the other premise may not have the ⊤ prefix; what value should we give to X in the proof:

    a^⊤, ⊤ & 1 ⇒ X              by &, from
      a^⊤, ⊤ ⇒ (⊤a)^⊤           by ⊤
      a^⊤, 1 ⇒ a^⊤              by 1

The solution is to pass on the ⊤ prefix if both premises have it, and to strip it off if only one of the premises has it. The rules derived form a solution to the problem of splitting the context between the two premises of a ⊗ inference. The rules work in a multiple-conclusioned setting, which is a general case of the single-conclusioned one. The rules are summarised in Appendix A. An expanded form of this section can be found in [20].
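The residue-intersection check for & can be sketched as follows (our simplification, not the paper's rules: residues are multisets of opaque formulae, each premise carries its ⊤-like flag, and the ⊤-prefix wrinkle is ignored):

```python
# Combining the two premises of a & (with) inference under lazy splitting:
# the conclusion's residue is the intersection of the premises' residues,
# and excess residue is only tolerated in a top-like premise.
from collections import Counter

def combine_with(res1, top1, res2, top2):
    """Return (residue, top_like) for the & conclusion, or None on failure."""
    c1, c2 = Counter(res1), Counter(res2)
    common = c1 & c2                       # multiset intersection
    if (c1 - common) and not top1:         # excess in a non-top-like premise
        return None
    if (c2 - common) and not top2:
        return None
    return sorted(common.elements()), top1 and top2

# The a & T example above: the left premise has empty residue and is not
# top-like; the right premise returns the maybe-consumed formula but is
# top-like, so the inference succeeds with empty residue.
print(combine_with([], False, ['maybe_a'], True))   # ([], False)
```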
4 Selecting the Active Formula

Lygon is based on multiple conclusion linear logic. Each inference step selects a formula and applies the appropriate inference rule bottom-up. When designing a language based on a multiple conclusion logic one can, at design time, impose the constraint that selecting the active formula be done using "don't care" nondeterminism. Doing this yields a more limited class of formulae but simplifies the implementation. This path was taken in the design of LC [18] and LO [3]. In the design of Lygon the opposite choice was made. As a result Lygon has a significantly larger class of formulae but has to contend with the problem that selecting the active formula may have to be done using "don't know" nondeterminism. Although in general selecting the active formula in Lygon may require backtracking, in certain situations we can safely commit to the selection. An investigation of when Lygon can safely use "don't care" nondeterminism to select the active formula involves an investigation of the permutability properties of linear logic. Investigations of these properties have been done by Andreoli [2] and by Galmiche and Perrier [5]. Note that there is a fair amount of overlap in the results of the two papers, although their motivations are different.
A (partial) solution to the problem in Lygon of selecting the active formula can be obtained by simply applying the observations of these papers. This section briefly summarises the results of the two papers; for more details we refer the reader to the papers themselves. In [2] Andreoli identifies two classes of connectives: synchronous and asynchronous. The latter can be permuted down in a proof and hence can be selected using "don't care" nondeterminism. More formally, if there is a proof of a sequent containing a formula whose topmost connective is asynchronous, then there is a proof of the sequent where that formula is the active formula. As an example, consider the proof of ⊢ a(1)⊥ ⅋ ⊥, ∃x a(x). Since ⅋ is asynchronous there exists a proof where the first inference applied is ⅋. For instance, the proof

    ⊢ a(1)⊥ ⅋ ⊥, ∃x a(x)      by ∃, from
      ⊢ a(1)⊥ ⅋ ⊥, a(1)       by ⅋, from
        ⊢ a(1)⊥, ⊥, a(1)      by ⊥, from
          ⊢ a(1)⊥, a(1)       by Ax

can be permuted into one in which the ⅋ is reduced first:

    ⊢ a(1)⊥ ⅋ ⊥, ∃x a(x)      by ⅋, from
      ⊢ a(1)⊥, ⊥, ∃x a(x)     by ⊥, from
        ⊢ a(1)⊥, ∃x a(x)      by ∃, from
          ⊢ a(1)⊥, a(1)       by Ax

The synchronous connectives are 1, 0, ⊗, ⊕, ! and ∃. The asynchronous connectives are ⊤, ⊥, ⅋, &, ? and ∀.
A further observation made in [2] is that if we have selected a synchronous formula and its subformulae have a synchronous connective topmost, then we can automatically select the subformulae. This process of continuing to select synchronous subformulae is known as focusing. In [5] Galmiche and Perrier systematically analyse the permutability properties of linear logic. In addition to the properties noted above, they also observe that if the sequent is of the form ⊢ !F, Ψ (where Ψ consists of formulae of the form ?F) then we can commit to the !F as the active formula. These observations (which are incorporated in the current version of Lygon) yield a significant reduction in the nondeterminism associated with selecting the active formula.
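The committed selection strategy can be sketched as follows (our own representation; the connective names are illustrative):

```python
# Active-formula selection: a formula whose topmost connective is
# asynchronous can be committed to ("don't care" nondeterminism); if every
# formula is synchronous or atomic, each one is a candidate and the choice
# may need backtracking ("don't know" nondeterminism).
ASYNCHRONOUS = {'top', 'bottom', 'par', 'with', 'query', 'forall'}
SYNCHRONOUS  = {'one', 'zero', 'tensor', 'plus', 'bang', 'exists'}

def candidates(goal):
    """goal: list of formulae, each a tuple whose head names its connective."""
    for i, f in enumerate(goal):
        if f[0] in ASYNCHRONOUS:
            return [i]                  # commit: no backtracking needed
    return list(range(len(goal)))       # try each in turn

goal = [('tensor', ('atom', 'a'), ('atom', 'b')),
        ('par', ('atom', 'c'), ('bottom',))]
print(candidates(goal))    # [1] -- commit to the par formula
```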
5 Summary of Results 5.1 Formal Results
For the next two theorems, Θ is a multiset of formulae of the form ⊤F.
Theorem 1 (Soundness) If Γ ⇒ Θ is provable in the lazy-splitting sequent calculus then there exists a proof of ⊢ Γ in the standard sequent calculus.
Proof: Omitted due to lack of space (see [20, Theorem 4.14]).

Theorem 2 (Completeness) If ⊢ Γ is provable in the standard sequent calculus then for some Θ there exists a proof of Γ ⇒ Θ in the lazy-splitting sequent calculus.
Proof: Omitted due to lack of space (see [20, Theorem 5.12]).

Theorem 3 Given a proof in the standard sequent calculus, there is a proof in the lazy-splitting sequent calculus which has the same structure up to the insertion of Use rules immediately before a formula is reduced.
Proof: An algorithm which maps proofs in the standard sequent calculus to proofs in the lazy-splitting sequent calculus is presented in [20, Algorithm 2]. It is easy to verify that the lazy-splitting proof produced conserves the structure of the proof.
              A        B        C        D
    Start   0.166    0.175    0.168    0.166
    1       1.518    1.166    0.551    0.496
    2       4.218    2.228    0.410    0.301
    3      39.45     4.536    2.268    0.440
    4       3.796    2.960    0.781    0.676
    5       5.830    5.368    0.576    0.658

Figure 1: Benchmark Results (times in seconds)
Theorem 4 If there is a proof in the lazy-splitting sequent calculus then a proof search incorporating the observations of [2, 5] will find it.
Proof: The theorems in [2, 5] state that if there is a proof then there is a proof which is "normal", that is, one which satisfies the optimisations summarised in Section 4. According to Theorem 3 there exists a corresponding lazy-splitting proof where the Use rule is only applied to a formula immediately before its reduction.
5.2 The Implementation
The Lygon implementation (version 0.4) is written in BinProlog⁵. It consists of 505 lines of code⁶ for the non-debugging version and 913 lines of code for the version including the debugger. In order to determine whether the optimisations presented do represent improvements, despite a certain amount of administrative overhead, we have run a few short benchmarks. The standard Lygon interpreter was modified to remove the optimisations, resulting in four versions:

                                LygonA   LygonB   LygonC   LygonD
    Lazy Splitting              no       no       yes      yes
    Active Formula Selection    no       yes      no       yes
Briefly, the first benchmark finds all paths between two given nodes in a cyclic graph and the second is an artificial test of splitting. The third benchmark demonstrates the use of the ⊤ connective in simulating "affine" predicates, which can be used at most once. The fourth benchmark finds all solutions for an instance of a modified bin-packing problem [7] and the final benchmark simulates a Petri net. Each of the benchmarks was run on all four versions of Lygon. The timings are given in Figure 1. All times are the average of ten runs on an idle DEC Alpha; times are given in seconds and were measured by BinProlog. The source code for the benchmarks is available at http://www.cs.mu.oz.au/~winikoff/papers/bench/index.html

In Figure 1 the row labelled "Start" is the (CPU) time taken to start the appropriate Lygon interpreter. Looking at the results we can see that the lazy splitting mechanism indeed represents a significant improvement. The difference in running time between LygonB and LygonD ranges between a factor of three and a factor of fifteen - more than an order of magnitude! The optimisations discussed in Section 4 in general give a modest increase in performance. However, there are exceptions: the affine benchmark shows a significant speedup and the Petri net benchmark shows a slowdown. This variance is due to two factors. Firstly, the implementation represents the goal as a (Prolog) list of formulae, and the selection of the active formula is done entirely at run time. It is possible to do at least some of the selection process at compile time. For example, once the formula a ⊗ ((b & (d ⅋ e)) ⊕ f) is selected as the active formula it can be completely processed without any further selection; this is due to an application of focusing and the immediate processing of asynchronous connectives. The slowdown is simply the result of the overhead. Secondly, the benefit gained depends to a very large extent on the goal. Some goals - for example one which does not make use of ⅋ - gain no benefit from intelligent selection of the active formula, whereas others can gain a significant benefit. An example of the latter are formulae of the form (1 ⅋ ⊥ ⅋ ... ⅋ ⊥) ⊗ print(x) ⊗ fail, where there are n occurrences of ⊥. The intelligent selection of the active formula will find exactly one solution, and do so rapidly. The versions of Lygon using the naïve selection mechanism (LygonA and LygonC) will find a rather large number of solutions⁷.

⁵ Version 2.20. ⁶ Including comments and whitespace.
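Reading footnote 7 as (2^(n-1)) · (n-1)! (our reading of the expression), the redundancy of the naïve mechanism grows very quickly with n; a quick tabulation:

```python
# Growth of the number of solutions found by the naive selection mechanism
# for the goal with n occurrences of bottom, taking the count from
# footnote 7 to be (2^(n-1)) * (n-1)!.
from math import factorial

def naive_solutions(n):
    return 2 ** (n - 1) * factorial(n - 1)

for n in range(1, 7):
    print(n, naive_solutions(n))
```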
6 Comparison with Other Work

The notion of lazy splitting was first introduced in [9], where a lazy method of finding goal-directed proofs (known in [9] as the input-output model of resource consumption) is given. An extension of this system presented in Hodas' thesis [8] handles the ⊤ rule lazily. Lolli is based on a single conclusion logic whereas Lygon is based on a multiple conclusion logic; thus (full) Lygon is a generalisation of Lolli. It is actually possible to encode Lygon into Lolli by introducing a new constant and using resolution to select the active formula. The goal a ⅋ b is encoded as ((a ⊸ n) ⊗ (b ⊸ n)) ⊸ n, where n is the new constant. This translation is reminiscent of the double negation translation of classical logic into intuitionistic logic. We feel, however, that a direct approach is more desirable for a number of reasons. Firstly, our presentation allowed us to use the permutability properties explored in [5, 2] to reduce the nondeterminism associated with selecting the active formula. In the Lolli encoding, selecting the next formula to be reduced is done by the resolution rule. Reducing the nondeterminism under the Lolli encoding would require modifying the resolution rule to examine the body of each clause and decide accordingly whether the clause could be committed to. While this could be done, it would seem to involve significantly more work than is required for the non-encoded presentation. Note furthermore that our solution handles the Lolli language as a simple special case in which all sequents just happen to have a single goal formula. Secondly, the application of logical equivalences contains some subtleties in the context of proof-theoretic arguments. The relationship between goal-directed (i.e. operational) equivalence and logical equivalence is nontrivial. The logical equivalence of two formulae only implies operational equivalence if the two formulae are within the appropriate subset. For example, the two formulae (a ⊕ b) ⊸ (a ⊕ b) and (a ⊸ (a ⊕ b)) & (b ⊸ (a ⊕ b)) are logically equivalent; however, the first is not a valid goal formula and does not have a goal-directed proof, even though the second does. For more on this sort of issue, and on the general problem of designing logic programming languages, see [21]. LinLog incorporates most of the optimisations noted in Section 4; these optimisations are also applicable to Forum.

⁷ (2^(n-1)) · (n-1)!
7 Conclusions and Further Work

We have shown how to eliminate the nondeterminism associated with resource allocation in Lygon. We have also shown how to apply known permutability results in order to reduce the nondeterminism associated with selecting the active formula. Both of these optimisations are incorporated in the Lygon interpreter⁸. Since both of these sources of nondeterminism are exponential, avoiding them is essential for a non-toy implementation. Measurements confirm that these optimisations are significant. Whilst the method presented for splitting resources between sub-branches is optimal (in that the ⊗ and ⊤ rules are deterministic), there is scope for further investigation regarding the selection of the active formula. Possibilities to be explored include a global analysis and having the user supply information. The methods reported in this paper are applicable to linear logic programming languages other than Lygon; in particular they have an impact on the implementation of Forum and LinLog. In addition, the lazy splitting rules have application in theorem provers for linear logic [17], where they eliminate a significant potential inefficiency. Other obvious areas of work include investigation into the compilation of Lygon, and relevant analyses and user-specified information such as types and modes. Non-implementation-related work includes investigations into natural applications for Lygon. These so far include graph algorithms, concurrent problems, AI search problems involving states and transitions, specifications, and a variety of database-related applications including the modelling of transactions and deductive and active databases. In addition there are a number of more theoretical issues that are raised, some of which relate to certain applications. These include the nature of negation as failure in a linear logic context and the bottom-up evaluation of Lygon.
Acknowledgments

We would like to thank David Pym for interesting discussions. We would like to thank an anonymous referee for pointing us to Hodas' thesis and for suggesting the encoding of Lygon in Lolli. Michael Winikoff is supported by an Australian Postgraduate Award (APA) scholarship.

⁸ Available from [19].
References
[1] V. Alexiev. Applications of linear logic to computation: An overview. Bulletin of the IGPL, 2(1):77–107, March 1994.
[2] J.-M. Andreoli. Logic programming with focusing proofs in linear logic. Journal of Logic and Computation, 2(3), 1992.
[3] J.-M. Andreoli and R. Pareschi. LO and behold! Concurrent structured processes. SIGPLAN Notices, 25(10):44–56, 1990.
[4] A. W. Bollen. Relevant logic programming. Journal of Automated Reasoning, 7:563–585, 1991.
[5] D. Galmiche and G. Perrier. On proof normalization in linear logic. Theoretical Computer Science, 135(1):67–110, December 1994.
[6] J.-Y. Girard. Linear logic. Theoretical Computer Science, 50:1–102, 1987.
[7] J. Harland and D. Pym. A note on the implementation and applications of linear logic programming languages. In G. Gupta, editor, Seventeenth Annual Computer Science Conference, pages 647–658, 1994.
[8] J. Hodas. Logic Programming in Intuitionistic Linear Logic: Theory, Design and Implementation. PhD thesis, University of Pennsylvania, 1994.
[9] J. S. Hodas and D. Miller. Logic programming in a fragment of intuitionistic linear logic (extended abstract). In Logic in Computer Science, pages 32–42. IEEE, 1991.
[10] J. S. Hodas and D. Miller. Logic programming in a fragment of intuitionistic linear logic. Information and Computation, 110(2):327–365, 1994.
[11] N. Kobayashi and A. Yonezawa. ACL: a concurrent linear logic programming paradigm. In International Logic Programming Symposium, pages 279–294, 1993.
[12] Y. Lafont. Introduction to linear logic. Lecture notes for the Summer School in Constructive Logics and Category Theory, August 1988.
[13] D. Miller. A multiple-conclusion meta-logic. In Logic in Computer Science, pages 272–281, 1994.
[14] M. A. Orgun and W. Ma. An overview of temporal and modal logic programming. In D. M. Gabbay and H. J. Ohlbach, editors, First International Conference on Temporal Logic, pages 445–479. Springer-Verlag, July 1994.
[15] D. Pym and J. Harland. A uniform proof-theoretic investigation of linear logic programming. Journal of Logic and Computation, 4(2):175–207, April 1994.
[16] A. Scedrov. A brief guide to linear logic. Bulletin of the European Association for Theoretical Computer Science, 41:154–165, June 1990.
[17] T. Tammet. Proof search strategies in linear logic. Programming Methodology Group Report 70, University of Göteborg and Chalmers University of Technology, March 1993.
[18] P. Volpe. Concurrent logic programming as uniform linear proofs. In G. Levi and M. Rodríguez-Artalejo, editors, Algebraic and Logic Programming, pages 133–149. Springer-Verlag, September 1994.
[19] M. Winikoff. Lygon home page. http://www.cs.mu.oz.au/~winikoff/lygon/lygon.html, 1995.
[20] M. Winikoff and J. Harland. Deterministic resource management for the linear logic programming language Lygon. Technical Report 94/23, Melbourne University, 1994.
[21] M. Winikoff and J. Harland. Characterising logic programming languages. Technical Report 95/26, Melbourne University, 1995.
[22] M. Winikoff and J. Harland. Implementation and development issues for the linear logic programming language Lygon. In Australasian Computer Science Conference, pages 563–572, February 1995.
A The Lazy Splitting System
Sequents in the lazy splitting system have the form Ξ : Δ_I ⟹ Δ_O / v, where Δ_I is the input context (the formulae available to the proof), Δ_O is the output context (the formulae left over) and v is a boolean flag recording whether the ⊤ rule was used. Ξ, the region to the left of the colon, is a nonlinear region in which formulae of the form ?F are kept. The symbols ℵ (aleph), ℶ (beth), ℷ (gimel) and ℸ (daleth) represent multisets of ⊤-tagged formulae; Γ, Δ, Δ′, Δ″, Δ₁ and Δ₂ represent multisets of formulae; a and b represent arbitrary formulae; p represents an atom; c represents a tagged formula. The notation aⁿ represents the formula a superscripted by n ⊤s.

The operation (·)° removes all tags; thus (⊤a)° = a. The function f, used in the ⊤ rule, tags every untagged formula:

    f(Δ, A) = f(Δ), f(A)        f(⊤A) = ⊤A        f(x) = ⊤x

mintag removes the ⊤ tag when exactly one of the two corresponding formulae carries a ⊤ tag; otherwise it leaves the formula unchanged.

The rules of the system are as follows.

Ax:  Ξ : p, p⊥, Δ ⟹ Δ / false

1:   Ξ : 1, Δ ⟹ Δ / false

⊥:   Ξ : ⊥, Γ, Δ ⟹ Δ′ / v   follows from   Ξ : Γ, Δ ⟹ Δ′ / v

⊤:   Ξ : ⊤, Γ, Δ ⟹ f(Γ, Δ) / true

⊗:   Ξ : a ⊗ b, Γ, Δ ⟹ Δ″, ℷ / x ∨ y   follows from
     Ξ : a, Γ, Δ ⟹ Δ′, ℷ, ℸ / x   and   Ξ : b, Δ′, ℸ ⟹ Δ″ / y

⅋:   Ξ : a ⅋ b, Γ, Δ ⟹ Δ′ / v   follows from   Ξ : a, b, Γ, Δ ⟹ Δ′ / v

⊕1:  Ξ : a ⊕ b, Γ, Δ ⟹ Δ′ / v   follows from   Ξ : a, Γ, Δ ⟹ Δ′ / v

⊕2:  Ξ : a ⊕ b, Γ, Δ ⟹ Δ′ / v   follows from   Ξ : b, Γ, Δ ⟹ Δ′ / v

&:   Ξ : a & b, Γ, Δ ⟹ mintag(Δ₁, Δ₂), ℵ / x ∧ y   follows from
     Ξ : a, Γ, Δ ⟹ Δ₁, ℵ, ℸ / x   and   Ξ : b, Γ, Δ ⟹ Δ₂, ℵ, ℷ / y,
     subject to the side conditions (x = true) ∨ (ℸ = Δ₁ = ∅),
     (y = true) ∨ (ℷ = Δ₂ = ∅) and Δ₁° = Δ₂°

!:   Ξ : !a, Δ ⟹ Δ / false   follows from   Ξ : a ⟹ ∅ / v

?:   Ξ : ?a, Γ, Δ ⟹ Δ′ / v   follows from   Ξ, a : Γ, Δ ⟹ Δ′ / v

?D:  a, Ξ : Γ, Δ ⟹ Δ′ / v   follows from   a, Ξ : a, Γ, Δ ⟹ Δ′ / v

∃:   Ξ : ∃x.a, Γ, Δ ⟹ Δ′ / v   follows from   Ξ : a[t/x], Γ, Δ ⟹ Δ′ / v

∀:   Ξ : ∀x.a, Γ, Δ ⟹ Δ′ / v   follows from   Ξ : a[y/x], Γ, Δ ⟹ Δ′ / v,
     where y is not free in the conclusion

Use: Ξ : c, Γ, Δ ⟹ Δ′ / v   follows from   Ξ : c°, Γ, Δ ⟹ Δ′ / v

In the ⊗ rule the leftovers of the first branch become the input of the second; in the & rule both branches receive the same input, and the side conditions ensure that the two branches consume the same resources unless a ⊤ absorbs the difference.
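As a concrete reading of the tag operations defined above, one possible realisation is sketched below. The representation (a formula paired with a count of its ⊤ tags) and all names are our own illustration, not the paper's implementation; mintag is rendered as a minimum over tag counts, which coincides with the definition when each formula carries at most one tag.

```python
# Sketch of the tag machinery of the lazy splitting system. A tag
# counts the superscripted ⊤s on a formula. Representation and names
# (Tagged, f, untag, mintag) are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tagged:
    formula: str
    tags: int = 0        # number of ⊤ tags on the formula

def f(delta):
    """The function f of the ⊤ rule: f(⊤A) = ⊤A and f(x) = ⊤x, so
    every untagged formula in the multiset acquires one tag."""
    return [c if c.tags > 0 else Tagged(c.formula, 1) for c in delta]

def untag(c):
    """The operation (.)°, which removes all tags: (⊤a)° = a."""
    return Tagged(c.formula, 0)

def mintag(delta1, delta2):
    """Pointwise over corresponding formulae: if exactly one of the
    pair carries a ⊤ tag, the tag is removed; otherwise the formula
    is unchanged. Taking the minimum tag count achieves this for
    tags in {0, 1}."""
    return [Tagged(c1.formula, min(c1.tags, c2.tags))
            for c1, c2 in zip(delta1, delta2)]
```

For example, applying f to a multiset containing an untagged p and an already-tagged q tags only p, and mintag of a tagged p against an untagged p yields the untagged formula.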