Constraint Based Proof Construction in Non-Commutative Logic
Jean-Marc Andreoli, Roberto Maieli and Paul Ruet
CNRS - Institut de Mathématiques de Luminy
163 avenue de Luminy, Case 907, 13288 Marseille Cedex 9 (France)
andreoli,maieli,[email protected]

July 13, 2001
Abstract

This work presents a computational interpretation of the construction process for cyclic (CyLL) and non-commutative (NL) sequent proofs. We assume a proof construction paradigm, based on a normalization procedure known as focussing, which efficiently manages the non-determinism of the construction. As in the linear case, a new formulation of focussing for NL is used to introduce a general constraint-based technique for dealing with partial information during proof construction. In particular, the procedure develops through construction steps building "intermediate objects" called abstract proofs, similar to the designs of ludics.
1 Introduction
We are interested here in the computational paradigm of proof construction in logical sequent calculi. The most straightforward (and naive) proof construction algorithm starts with a single open node labeled by a given sequent (or even a single formula), then tries to incrementally construct a proof by repeatedly expanding each open node, selecting and applying an inference from the sequent calculus, thus possibly introducing new open nodes. This provides an interesting computational model, particularly suited to capturing non-deterministic processes, since proof construction itself is intrinsically non-deterministic: in the naive procedure, for example, many choices of different kinds (choice of the principal formula, choice of the inference) have to be made at each expansion step.

However, it is well known that, due to intrinsic permutability properties of the inferences in sequent calculi, some strategy is needed in order to avoid making sets of choices which lead to the same object (modulo permutation of inferences). Such a strategy, called focussing, has been proposed in [2]. It is based on the generic concept of polarity of formulas, and therefore applies to any logical system where connectives have polarities, such as linear logic or non-commutative logic [1, 14]. Focussing deals with two important forms of irrelevant non-determinism in proof construction: on the one hand, the instant of the decomposition of a negative connective is simply irrelevant; on the other hand, the interval between two decompositions of positive connectives, if they belong to the same formula and one is an immediate successor of the other in the formula, is irrelevant. The strategy to avoid these forms of irrelevant non-determinism can be expressed in the sequent calculi themselves by modifying the syntax of the sequents, with the introduction of a distinguished formula called the "focus". Such focussing sequent calculi have thus been proposed in various contexts for linear logic [9] and in a non-commutative framework [12, 13]. There is also an alternative presentation which does not rely on syntactic conventions while capturing exactly the same content: keep sequents as simple as possible (e.g. made of atoms only), but refine the inferences of the calculus themselves. Such a refined, simplified calculus, called the (focussing) bipolar calculus, has been presented in [3] for linear logic. It is strictly isomorphic to the focussing sequent calculus of linear logic, so proof construction can be performed equivalently in both systems. We extend here the focussing bipolar calculus to non-commutative logic, without problem given the genericity of the approach.

Now, the naive proof construction procedure can be directly applied to the focussing bipolar calculus, but this is still unsatisfactory. Indeed, although focussing eliminates a lot of irrelevant non-determinism, there remain sources of non-determinism which are intractable in the naive approach. The most well-known one appears in the first-order case with the existential quantification rule [10]: a choice of a term to assign to the quantified variable has to be made, but there are infinitely many possibilities. A random enumeration is clearly unsuitable, essentially because two different random choices may lead to essentially the same object, differing only by a term which, anyway, is irrelevant to the proof. For example, here are two proofs of the formula ∀x p(x) ⊸ ∃x p(x) which differ by a purely irrelevant choice (of the term to assign to x):
⊢ p(a), p(a)⊥
───────────────── (∃)
⊢ ∃x p(x), p(a)⊥
───────────────── (∃)
⊢ ∃x p(x), ∃x p(x)⊥

⊢ p(b), p(b)⊥
───────────────── (∃)
⊢ ∃x p(x), p(b)⊥
───────────────── (∃)
⊢ ∃x p(x), ∃x p(x)⊥
Again, we need a strategy to avoid such irrelevant non-determinism. In the case of the existential quantification, the solution is quite familiar: it is based on unification, which performs the choice of the term for an existentially quantified variable in a lazy way, making actual choices only when they are needed somewhere in the proof (a minimal sketch of such a unifier is given at the end of this introduction). This leads the construction procedure to manipulate not ground proofs, but abstract proofs containing only partial information about the object being constructed (partiality derives from the presence of uninstantiated variables occurring in the terms). This also means that, at each step, some constraints on the different variables have to be propagated across the proof, when variables are shared between multiple nodes: this is the role of the unification process. We thus obtain a slightly less naive construction process which intertwines the usual naive expansion process and the "dual" unification process, which works in the reverse direction.

There is another case of irrelevant non-determinism which the naive procedure cannot avoid, namely the one caused by the positive multiplicative connectives: to decompose such a connective, a choice of splitting of the context has to be performed. If we start the construction with a single formula, this choice always involves only finitely many alternatives. But if we start with a sequent which is itself partially defined (not only may the atoms it contains be partially instantiated, but they may not all be known), the number of alternatives may become infinite. And in both cases, as before, a random enumeration is inadequate. A solution based on lazy splitting, and capable of dealing with partially defined (or even, possibly, completely undefined) sequents, has been proposed for the focussing bipolar calculus of linear logic in [3]. It couples the naive construction procedure with a non-deterministic but finite constraint propagation procedure, thus leading to a constraint based construction mechanism. We study here the extension of this mechanism to non-commutative logic. Interestingly, it is shown that it extends straightforwardly to cyclic logic [15], but not directly to non-commutative logic, although the latter is an extension of both linear and cyclic logic. We then propose a new solution, adapted to the full non-commutative case.

The overall goal of the line of research to which this paper contributes is two-fold: on the one hand, define a proof construction mechanism capable of dealing with partial information; on the other hand, eliminate sources of irrelevant non-determinism in the construction process. Both aspects are important in modeling real situations, especially in the context of widely distributed applications, such as Internet applications, where there is no central locus with a vision of the whole execution (hence the importance of dealing with partial information), and programs are strongly influenced by their environment, which, at any reasonable level of abstraction, behaves in a highly non-deterministic way. The proof construction paradigm provides an appropriate level of abstraction to understand coordination issues in such applications, as shown by the CLF system [4], a component-based middleware infrastructure built around the central notion of resource (linear logic) and coordination viewed as resource manipulation (proof construction). However, CLF exploits only a very limited fragment of linear logic.
The constraint based proof construction mechanism, developed here in the non-commutative case, offers perspectives for further extensions of the CLF platform in particular, and for a better understanding of coordination issues in distributed applications in general.
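The lazy treatment of existential witnesses mentioned above rests on ordinary first-order unification: rather than guessing a closed term, the construction introduces a fresh variable and lets later steps constrain it. The following minimal sketch (our own illustrative code, independent of the calculi discussed in this paper) shows such a unifier; the term encoding and the omission of the occurs check are simplifications.

```python
# Terms: a variable is a string starting with '?', a compound term is (functor, args...).
def walk(t, subst):
    """Follow the substitution chain for variables."""
    while isinstance(t, str) and t.startswith("?") and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a most general unifier extending `subst`, or None if none exists."""
    subst = dict(subst or {})
    stack = [(t1, t2)]
    while stack:
        a, b = (walk(x, subst) for x in stack.pop())
        if a == b:
            continue
        if isinstance(a, str) and a.startswith("?"):
            subst[a] = b                      # bind the variable (occurs check omitted)
        elif isinstance(b, str) and b.startswith("?"):
            subst[b] = a
        elif isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))   # decompose compound terms argument-wise
        else:
            return None                       # clash: distinct constants or functors
    return subst

# In the proofs above, the witness stays a variable '?x' until an axiom link
# forces it to coincide with the argument of the dual atom.
print(unify(("p", "?x"), ("p", "?y")))        # {'?x': '?y'}: no arbitrary constant is chosen
```

In this style, an arbitrary choice such as the constant a or b in the two proofs above is simply never made: the variable is only instantiated when some other constraint in the proof requires it.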
2 Non-commutative Logic
2.1 Order varieties
An order variety [1, 14] on a given set X is a ternary relation α on X which is:
∀x, y, z ∈ X:     α(x, y, z) ⇒ α(y, z, x)                                   (cyclic)
∀x, y ∈ X:        ¬α(x, x, y)                                               (anti-reflexive)
∀x, y, z, t ∈ X:  α(x, y, z) and α(z, t, x) ⇒ α(y, z, t)                    (transitive)
∀x, y, z, t ∈ X:  α(x, y, z) ⇒ α(t, y, z) or α(x, t, z) or α(x, y, t)       (spreading)

For instance, any oriented cycle G = (a1 → ... → an → a1) induces a total order variety r(G) on the set of its vertices by: r(G)(x, y, z) iff y is between x and z in G; this order variety is denoted by a1.a2. ... .an = a2. ... .an.a1, etc. Given a set X and a partial order ω, we denote by X.ω or ω.X the cyclic closure of X < ω, which is clearly an order variety on X ⊎ |ω|.

The spreading condition enables one to systematically give "presentations" of order varieties as orders in a reversible way, whence the name. The correspondence is as follows. Given an order variety α on X and x ∈ X, we may define an order (α)_x on X \ {x} by:

(α)_x(y, z)   iff   α(x, y, z)

Conversely, given a partial order ω and z ∈ X, let ω(x, y | z) denote the following ternary relation:

ω(x, y | z)   iff   (ω(x, z) ⇔ ω(y, z)) and (ω(z, x) ⇔ ω(z, y))

expressing that z is in the same relation with x and with y in ω; then we may define an order variety ω̄ on X, the closure of ω, by:

ω̄(x, y, z)   iff   [ω(x, y) and ω(x, y | z)]  or  [ω(y, z) and ω(y, z | x)]  or  [ω(z, x) and ω(z, x | y)]

It is shown in [1] that ω̄ is indeed an order variety. When ω̄ = α, we say that ω presents α. Given two orders ω and τ, we may define the following orders on the disjoint union |ω| + |τ| of their supports:

ω < τ = ω + τ + |ω| × |τ|        (series sum)
ω ∥ τ = ω + τ                    (parallel sum)

One proves easily that the closure identifies series and parallel sums: the closures of ω < τ and of ω ∥ τ coincide. The resulting order variety is denoted ω ∗ τ and called the gluing of ω and τ. It enjoys:

ω ∗ τ = ω̄ + ω.|τ| + |ω|.τ + τ̄

The two processes of fixing a point in an order variety and gluing orders are related by the following equations:

(α)_x ∗ x = α        and        (ω ∗ x)_x = ω

for α an order variety on a set X, x ∈ X and ω an order on X \ {x}. These equations state that the species of order varieties in the sense of Joyal [7] has for derivative the species of partial orders.
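The correspondence between order varieties and their presentations can be made concrete with a small sketch. The Python code below (all function names are ours, introduced only for illustration) represents a partial order as a set of pairs and an order variety as a set of triples, checks the four axioms, and implements the operations used above: fixing a point, taking the closure of an order, and gluing two orders as the closure of their series sum.

```python
from itertools import permutations, product

def is_order_variety(alpha, X):
    """Check the four axioms of an order variety on the support X."""
    for (x, y, z) in alpha:
        if (y, z, x) not in alpha:                      # cyclicity
            return False
    if any(x == y for (x, y, _) in alpha):              # anti-reflexivity: no alpha(x, x, y)
        return False
    for (x, y, z) in alpha:
        for t in X:
            if (z, t, x) in alpha and (y, z, t) not in alpha:   # transitivity
                return False
            # spreading: alpha(x,y,z) => alpha(t,y,z) or alpha(x,t,z) or alpha(x,y,t)
            if t not in (x, y, z) and not ({(t, y, z), (x, t, z), (x, y, t)} & alpha):
                return False
    return True

def fix_point(alpha, x):
    """The order (alpha)_x on X minus {x}: (alpha)_x(y, z) iff alpha(x, y, z)."""
    return {(y, z) for (x0, y, z) in alpha if x0 == x}

def same_side(omega, x, y, z):
    """omega(x, y | z): z compares in the same way with x and with y."""
    return ((x, z) in omega) == ((y, z) in omega) and ((z, x) in omega) == ((z, y) in omega)

def closure(omega, support):
    """The order variety presented by the partial order omega on `support`."""
    bar = set()
    for (x, y, z) in permutations(support, 3):
        if (((x, y) in omega and same_side(omega, x, y, z)) or
            ((y, z) in omega and same_side(omega, y, z, x)) or
            ((z, x) in omega and same_side(omega, z, x, y))):
            bar.add((x, y, z))
    return bar

def series(omega, tau, s_omega, s_tau):
    """Series sum omega < tau on the disjoint union of the supports."""
    return omega | tau | set(product(s_omega, s_tau))

def gluing(omega, tau, s_omega, s_tau):
    """omega * tau: closure of the series sum (equal to the closure of the parallel sum)."""
    return closure(series(omega, tau, s_omega, s_tau), s_omega | s_tau)

# A tiny check of the derivative equations: fixing x in a.b.x gives back a < b.
a = gluing({("a", "b")}, set(), {"a", "b"}, {"x"})     # the order variety a.b.x
assert fix_point(a, "x") == {("a", "b")}
assert gluing(fix_point(a, "x"), set(), {"a", "b"}, {"x"}) == a
assert is_order_variety(a, {"a", "b", "x"})
```

The final assertions check, on this small example, the two equations relating gluing and the fixing of a point.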
2.2 Series-parallel order varieties
Series-parallel orders are those obtained from the unique orders on singletons (the empty relation ∅ with support {x}) by series and parallel sums. For a more substantial survey, see [11]. Series-parallel order varieties are precisely those order varieties which can be presented by a series-parallel order. A series-parallel order variety α on a set X can be represented by a rootless planar tree (or seaweed, or alga) with leaves labeled by elements of X and ternary nodes labeled by < or ∥: take an arbitrary presentation of α as a series-parallel order ω, write ω as a (non-unique, by associativity and commutativity) planar binary tree t with leaves labeled by elements of X, and root and internal nodes labeled by < or ∥ for series and parallel sum respectively; then remove the root of t.
A monopole is a formula built from the negative atoms using the negative connectives:

M ::= ⊥ | M1 ℘ M2 | M1 ▽ M2 | ⊤ | M1 & M2 | ∀x M | ?a | a

A bipole is a formula built from the monopoles, possibly prefixed with the modality !, and the positive atoms, using the positive connectives:

B ::= 1 | B1 ⊗ B2 | B1 ⊙ B2 | 0 | B1 ⊕ B2 | ∃x B | !M | M | a⊥
Furthermore, it is assumed that each bipole contains at least one positive atom. The focussing bipolar sequent calculus of Non-Commutative Logic is defined as follows. A sequent (resp. pre-sequent) is a pair Γ : α (resp. Γ : ω) where Γ, α, ω are, respectively, a set, an order variety and an order on negative atoms. The set of all the pre-sequents (resp. sequents) is denoted S̃ (resp. S). An inference (resp. pre-inference) is a pair Σ / σ where Σ is a finite set of sequents (the premisses) and the conclusion σ is a sequent (resp. pre-sequent). The set of all the pre-inferences (resp. inferences) is denoted Ĩ (resp. I).
For each bipole F, we define (below) a set ‖F‖ of inferences. The focussing bipolar sequent calculus is defined by the inferences in the union of the sets ‖F‖ where F ranges over the bipoles. For each bipole F, the set ‖F‖ contains all the inferences which capture in a single step the complete decomposition of F in the context of the conclusion of the inference. This operation can be defined in two stages which correspond to the decomposition of each of the two layers of the bipole. For the sake of clarity, in the sequel we will not consider the (first-order) quantifications, although all the results can easily be adapted to this case.

Definition 2 For any bipole F, the set ‖F‖ is defined in four steps. First, each connective is interpreted as an operator on sets of pre-sequents and of pre-inferences:
⊥̇ : ℘(S̃)                                    1̇ : ℘(Ĩ)
℘̇, ▽̇ : ℘(S̃) × ℘(S̃) → ℘(S̃)                  ⊗̇, ⊙̇ : ℘(Ĩ) × ℘(Ĩ) → ℘(Ĩ)
?̇ : ℘(S̃) → ℘(S̃)                             !̇ : ℘(Ĩ) → ℘(Ĩ)

defined by

⊥̇ = { ∅ : ∅ }
S1 ℘̇ S2 = { Γ1, Γ2 : ω1 ∥ ω2  |  Γi : ωi ∈ Si, i = 1, 2 }
S1 ▽̇ S2 = { Γ1, Γ2 : ω1 < ω2  |  Γi : ωi ∈ Si, i = 1, 2 }
?̇ S = { Γ, |ω| : ∅  |  Γ : ω ∈ S }

1̇ = { ∅ / (∅ : ∅) }
I1 ⊗̇ I2 = { Σ1, Σ2 / (Γ1, Γ2 : ω1 ∥ ω2)  |  Σi / (Γi : ωi) ∈ Ii, i = 1, 2 }
I1 ⊙̇ I2 = { Σ1, Σ2 / (Γ1, Γ2 : ω1 < ω2)  |  Σi / (Γi : ωi) ∈ Ii, i = 1, 2 }
!̇(I) = { Σ / (Γ : ∅)  |  Σ / (Γ : ∅) ∈ I }

A mapping M ↦ M⁻ from monopoles into the set ℘(S̃) of sets of pre-sequents is inductively defined as follows:

⊥⁻ = ⊥̇        (M1 ℘ M2)⁻ = M1⁻ ℘̇ M2⁻        (? a)⁻ = ?̇(a⁻)
⊤⁻ = ∅        (M1 ▽ M2)⁻ = M1⁻ ▽̇ M2⁻        (M1 & M2)⁻ = M1⁻ ∪ M2⁻
a⁻ = { ∅ : a }

where a is a negative atom and M1, M2 are monopoles.
A mapping B ↦ B⁺ from bipoles into the set ℘(Ĩ) of sets of pre-inferences is inductively defined as follows:

1⁺ = 1̇        (B1 ⊗ B2)⁺ = B1⁺ ⊗̇ B2⁺        (! M)⁺ = !̇(M⁺)
0⁺ = ∅        (B1 ⊙ B2)⁺ = B1⁺ ⊙̇ B2⁺        (B1 ⊕ B2)⁺ = B1⁺ ∪ B2⁺

M⁺ = { (Γ, Δ : ω ∗ λ) / (Δ : λ)  |  Γ : ω ∈ M⁻,  Δ : λ ∈ S̃ }
(a⊥)⁺ = { ∅ / (∅ : a),  ∅ / (a : ∅) }

where a is a negative atom, M is a monopole and B1, B2 are bipoles.
Finally, for any bipole F we have

‖F‖ = { Σ / (Γ' : α)  |  Σ / (Γ : ω) ∈ F⁺  and  Γ ⊆ Γ'  and  α ⊴ ω̄ }
Note that the conditions Γ ⊆ Γ' and α ⊴ ω̄ in the definition of ‖F‖ capture, respectively, the structural rules of Weakening (Contraction is implicit in the fact that Γ, Γ' are sets) and Entropy. Thus, in the focussing bipolar calculus, the structural rules never occur in the decomposition of the bipoles themselves, only at the beginning of such decompositions.
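To make Definition 2 more tangible, the following sketch implements the negative-layer operators on sets of pre-sequents (⊥̇, ℘̇, ▽̇, ?̇ and the union interpreting &), encoding a pre-sequent as a pair of a set of atoms and a series-parallel order term. The encoding and the function names (par_dot, seq_dot, ...) are ours, and the positive layer as well as the weakening/entropy closure of ‖F‖ are not modelled. Running it on the first monopole of Example 1 below reproduces the two pre-sequents of M1⁻.

```python
from itertools import product

# A pre-sequent is a pair (gamma, omega): a frozenset of negative atoms and a
# series-parallel order term over atoms ("par"/"seq" nodes, None for the empty order).
def par(w1, w2):
    return w1 if w2 is None else w2 if w1 is None else ("par", w1, w2)

def seq(w1, w2):
    return w1 if w2 is None else w2 if w1 is None else ("seq", w1, w2)

def atoms(w):
    """Support |omega| of an order term."""
    if w is None:
        return frozenset()
    if isinstance(w, tuple):
        return atoms(w[1]) | atoms(w[2])
    return frozenset({w})

# Interpretation of the negative connectives as operators on sets of pre-sequents.
def bottom_dot():                      # bottom
    return {(frozenset(), None)}

def atom_dot(a):                       # a negative atom
    return {(frozenset(), a)}

def par_dot(s1, s2):                   # M1 par M2: parallel sum of the orders
    return {(g1 | g2, par(w1, w2)) for (g1, w1), (g2, w2) in product(s1, s2)}

def seq_dot(s1, s2):                   # M1 seq-par M2: series sum of the orders
    return {(g1 | g2, seq(w1, w2)) for (g1, w1), (g2, w2) in product(s1, s2)}

def whynot_dot(s):                     # ?M: the ordered atoms move to the classical part
    return {(g | atoms(w), None) for (g, w) in s}

def with_dot(s1, s2):                  # M1 & M2: union (two alternative pre-sequents)
    return s1 | s2

# The first monopole of Example 1: (a par (b seq c)) & (a' seq (b' par ?c'))
m1 = with_dot(par_dot(atom_dot("a"), seq_dot(atom_dot("b"), atom_dot("c"))),
              seq_dot(atom_dot("a'"), par_dot(atom_dot("b'"), whynot_dot(atom_dot("c'")))))
for g, w in sorted(m1, key=str):
    print(set(g) or "{}", ":", w)
```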
Example 1 Consider the following bipole

F = p⊥ ⊗ (((a ℘ (b ▽ c)) & (a' ▽ (b' ℘ ?c'))) ⊙ q⊥ ⊙ (r⊥ ⊗ !(d ℘ e) ⊗ (f ▽ g)))

whose three monopoles are M1 = (a ℘ (b ▽ c)) & (a' ▽ (b' ℘ ?c')), M2 = d ℘ e and M3 = f ▽ g. Applying the definitions above, we have
M1⁻ = { ∅ : a ∥ (b < c),  c' : a' < b' }
M1⁺ = { (Γ1 : λ1 ∗ (a ∥ (b < c))) / (Γ1 : λ1),  (Γ1, c' : λ1 ∗ (a' < b')) / (Γ1 : λ1) }
M2⁻ = { ∅ : d ∥ e }        (!M2)⁺ = { (Γ2 : d ∗ e) / (Γ2 : ∅) }
M3⁻ = { ∅ : f < g }        M3⁺ = { (Γ3 : λ3 ∗ (f < g)) / (Γ3 : λ3) }
where Γ1, Γ2, Γ3 range over sets, and λ1, λ3 over orders (λ2 = ∅ because of the exponential). Now (p⊥)⁺ consists of the axioms ∅ : p and p : ∅, which we can write p'' : p' where one of p', p'' is p and the other is ∅ (we make the same convention for q and r). Hence, F⁺ consists of the pre-inferences of the form

Γ1 : λ1 ∗ (a ∥ (b < c))   [or  Γ1, c' : λ1 ∗ (a' < b')]        Γ2 : d ∗ e        Γ3 : λ3 ∗ (f < g)
──────────────────────────────────────────────────────────────────────
Γ1, Γ2, Γ3, p'', q'', r'' : p' ∥ (λ1 < q' < (r' ∥ λ3))

- For each premiss equation X = ω ∗ Y,

PO:   C  →  [X(z1, z2, z3)] ; C

if (modulo cyclic permutation of z1, z2, z3) one of the following holds:
  [z1 = u1][z2 = u2][z3 = u3] ∈ C and (u1, u2, u3) ∈ ω, or
  [z1 = u1][z2 = u2][z3 ∈ |Y|] ∈ C and (u1, u2) ∈ ω, or
  [z1 = u1][Y(z2, z3)] ∈ C and u1 ∈ |ω|, or
  [Y(z1, z2, z3)] ∈ C

- For each conclusion equation X ⊴ b(Y1, ..., Yn), where b(i, j, k) denotes the ternary relation induced by b on the indices 1, ..., n,

CO1:  [X(z1, z2, z3)] ; C  →  [Yi(z1, z2, z3)] ; C        if [z1 ∈ |Yi|][z2 ∈ |Yi|][z3 ∈ |Yi|] ∈ C

CO2:  [X(z1, z2, z3)] ; C  →  [Yi(z1, z2)] ; C            if [z1 ∈ |Yi|][z2 ∈ |Yi|][z3 ∈ |Yj|] ∈ C and i ≠ j

CO3:  [X(z1, z2, z3)] ; C  →  deadend                     if [z1 ∈ |Yi|][z2 ∈ |Yj|][z3 ∈ |Yk|] ∈ C and i ≠ j ≠ k ≠ i and ¬b(i, j, k)

Figure 3: Simplification rules: propagating order information
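The case analysis performed by rules CO1-CO3 can be read off directly as code. In the sketch below (our own illustrative encoding), `place_component` records, for each consumer place, the index of the component Yi whose support contains it, and `b_rel` stands for the ternary relation induced by the bipole structure b on the indices; both are parameters we assume to be available, and cyclic rotations of the triple are left to the caller.

```python
def dispatch_conclusion_triple(triple, place_component, b_rel):
    """Apply CO1/CO2/CO3 to one infon [X(z1, z2, z3)].

    place_component maps each place to the index i of the component Y_i whose
    support contains it; b_rel(i, j, k) is the cyclic ternary relation induced by
    the bipole structure b on the indices (an assumption of this sketch).
    Returns ('CO1', i, triple), ('CO2', i, pair), ('deadend',) or None.
    Cyclic rotations of the triple are assumed to be handled by the caller.
    """
    z1, z2, z3 = triple
    i, j, k = place_component[z1], place_component[z2], place_component[z3]
    if i == j == k:                                   # CO1: the whole triple goes to Y_i
        return ("CO1", i, (z1, z2, z3))
    if i == j != k:                                   # CO2: only the binary constraint survives
        return ("CO2", i, (z1, z2))
    if len({i, j, k}) == 3 and not b_rel(i, j, k):    # CO3: incompatible with b
        return ("deadend",)
    return None                                       # nothing to propagate for this orientation

# Example: places z1, z2 in component 1 and z3 in component 2.
comp = {"z1": 1, "z2": 1, "z3": 2}
print(dispatch_conclusion_triple(("z1", "z2", "z3"), comp, lambda i, j, k: True))
# ('CO2', 1, ('z1', 'z2'))
```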
Lemma 1 If ρ is an n-ary cyclic relation (where n = 2, 3), then ρ° is an n-ary cyclic relation which is isomorphic to the restriction of ρ to P, where P = { u | [z = u] }.

Proof. It is easy to show by induction that for a given consumer place z, there is at most one occurrence of an infon [z = u] in any state. Hence [z = u] defines a mapping z ↦ u from |ρ°| into P ∩ |ρ|. Rule PL ensures that, in any open state at the end of Simplification, such as Co, this mapping is in fact a bijection (otherwise, the state would have been expanded into a deadend).

We have the following obvious consequence:

Corollary 1 If ω is an order (resp. α an order variety) then so is ω° (resp. α°). Furthermore the closure of ω° equals (ω̄)°.

Definition 4 For each side variable Y, hence ranging over orders, we introduce another side variable Ẏ, ranging over order varieties, and defined by the equation Ẏ = Y ∗ ⋆ where ⋆ is an arbitrary new place. Now, we extend Co (modulo cyclicity) with new infons involving these new side variables:

[Y(z1, z2, z3)] → [Ẏ(z1, z2, z3)]
[Y(z1, z2)] → [Ẏ(z1, z2, ⋆)]
Lemma 2 For any side variable Y, if the ternary relation Ẏ° is an order variety then the binary relation Y° is an order, and Y° = (Ẏ°)_⋆ and Y° ∗ ⋆ = Ẏ°.

Now, this holds only if Ẏ° is an order variety. To ensure this, we add the following rewrite rules to the Simplification procedure.
OV1:  [Ẏ(z1, z2, z3)] ; [Ẏ(z3, z2, z1)]  →  deadend

OV2:  [Ẏ(z, z1, z2)] ; [Ẏ(z2, z3, z)]  →  [Ẏ(z1, z2, z3)]

OV3:  [Ẏ(z1, z2, z3)] ; [z ∈ |Y|]  →  [Ẏ(z, z2, z3)]  |  [Ẏ(z1, z, z3)]  |  [Ẏ(z1, z2, z)]

(rule OV3 has three alternative output states, one for each of the listed infons)
Note that these extra rules do not violate the essential properties stated by Theorem 3: Termination (since the number of infons they can possibly add is limited by the total number of triples on D_Y), Preservation (since these rules simply express that Ẏ should be an order variety in any solution) and finiteness (since they involve only finite choices, with at most 3 alternatives). The direct consequence of these rules is:

Lemma 3 For any side variable Y, the ternary relation Ẏ° is an order variety.

We can now show the following central theorem:
Theorem 4 The following properties hold:
- For any conclusion equation X ⊴ b(Y1, ..., Yn) we have X° ⊆ b(Y1°, ..., Yn°).
- For any premiss equation X = ω ∗ Y we have ω° ∗ Y° ⊆ X°.

Proof. Consider the first property: let X ⊴ b(Y1, ..., Yn) be a conclusion equation.

1. Let's first show that |Yi°| ∩ |Yj°| = ∅ for i ≠ j. Indeed, if z ∈ |Yi°| ∩ |Yj°| then we would have [z ∈ |Yi|] ∧ [z ∈ |Yj|] and rule CL would have led to a deadend.

2. Let z ∈ |X°|. Hence we have [z ∈ |X|], which must have been obtained by rule CS from [z ∈ |Yi|] for some i, hence z ∈ |Yi°|. Thus, |X°| ⊆ ⋃i |Yi°|. The converse inclusion is shown in a similar way.

3. Let z1, z2, z3 ∈ |X°|. Hence we have z1 ∈ |Yi°| ∧ z2 ∈ |Yj°| ∧ z3 ∈ |Yk°| for some i, j, k, and therefore [z1 ∈ |Yi|] ∧ [z2 ∈ |Yj|] ∧ [z3 ∈ |Yk|]. If X°(z1, z2, z3), then [X(z1, z2, z3)] and we have one of the following cases:
  - i = j = k and, by application of rule CO1, we have [Yi(z1, z2, z3)], hence Yi°(z1, z2, z3);
  - i = j ≠ k and, by application of rule CO2, we have [Yi(z1, z2)], hence Yi°(z1, z2);
  - i ≠ j ≠ k ≠ i. Hence b(i, j, k), otherwise rule CO3 would have applied, leading to a deadend, and state Co would not have been produced.

Thus, in all cases, we get b(Y1°, ..., Yn°)(z1, z2, z3). Thus we have shown that X° ⊆ b(Y1°, ..., Yn°).
We now come to the second property: let X = ω ∗ Y be a premiss equation.

1. Let's first show that |ω°| ∩ |Y°| = ∅. Indeed, if z ∈ |ω°| ∩ |Y°|, then we would have [z = u] ∧ [z ∈ |Y|] for some u ∈ |ω|, which is impossible because rule PS sends z either onto ω or onto Y, but never both.

2. Let z ∈ |ω°| ∪ |Y°|. Hence, we have [z = u] for some u ∈ |ω|, or we have [z ∈ |Y|], which, in both cases, must have been produced by rule PS from [z ∈ |X|], hence z ∈ |X°|. Thus, |ω°| ∪ |Y°| ⊆ |X°|. The converse inclusion is shown in a similar way.

3. Let z1, z2, z3 ∈ |X°| such that (ω° ∗ Y°)(z1, z2, z3). Hence we have one of the following cases:
  - [z1 = u1] ∧ [z2 = u2] ∧ [z3 = u3] with u1, u2, u3 ∈ |ω| and ω°(z1, z2, z3). Hence ω(u1, u2, u3) and we are in the conditions of application of rule PO.
  - [z1 = u1] ∧ [z2 = u2] ∧ [z3 ∈ |Y|] with u1, u2 ∈ |ω| and ω°(z1, z2). Hence ω(u1, u2) and we are in the conditions of application of rule PO.
  - [z1 = u1] ∧ [z2 ∈ |Y|] ∧ [z3 ∈ |Y|] with u1 ∈ |ω| and Y°(z2, z3). Hence [Y(z2, z3)] and we are in the conditions of application of rule PO.
  - [z1 ∈ |Y|] ∧ [z2 ∈ |Y|] ∧ [z3 ∈ |Y|] with Y°(z1, z2, z3). Hence [Y(z1, z2, z3)] and we are in the conditions of application of rule PO.

Thus, in all cases, by application of rule PO, we get X°(z1, z2, z3). Thus, we have shown ω° ∗ Y° ⊆ X°.
Theorem 4 ensures that the assignment Y ↦ Y° is a solution on the side variables at state Co, from which it is easy to infer a solution on all the variables. Hence, we have the following corollary, which completes the proof of Theorem 3.
Corollary 2 Simplification produces no inconsistent open states.

In fact, it is easy to see that Simplification produces the minimal solutions:
Lemma 4 An assignment σ of the side variables is a solution of the set of infons at state Co if and only if for every side variable Y we have Y° ⊆ σ(Y) ↾ D_Y.
Proof. Let σ be a solution of the set of infons at Co. Hence for any side variable Y, σ(Y) satisfies all the infons in Co. From the support set infons, we conclude that D_Y ⊆ |σ(Y)|. From the order infons, we conclude that, in D_Y, if Y°(z1, z2), then [Y(z1, z2)] and hence σ(Y)(z1, z2). The same applies to the order varieties Ẏ° and the closure of σ(Y).
4.4 The Generation Procedure

The Generation procedure starts from each open (and hence consistent) state at the end of the Simplification procedure and proceeds by applying the rewrite rules of Figure 4 and Figure 5. We now consider a given open node Co at the end of the Simplification procedure. It therefore defines the sets D_Z and the structures Z° for each variable Z. Using those, the Generation procedure produces additional infons of the following forms:

[X ↾ D_X ⊴ α] ;  [X ↾ D_X = α] ;  [Y ↾ D_Y = τ]
[X = α] ;  [Y = τ]

The rules of Figure 4 propagate order infons downwards in the abstract proof tree and the rules of Figure 5 propagate support set infons upwards, i.e., in both cases, in the opposite direction of the Simplification procedure (just as in the commutative or cyclic case).
Theorem 5 Generation satisfies Resolution, Termination and Preservation, and produces no inconsistent state.
Proof.
- For each side variable Y, let {Xj = ωj ∗ Y}, j = 1, ..., m, be all the premiss equations involving Y,

PO*:   [Xj ↾ D_Xj ⊴ βj], j = 1, ..., m   →   [Y ↾ D_Y = τ]

with one output state for each τ such that Y° ⊆ τ and ωj° ∗ τ ⊴ βj for all j = 1, ..., m.

- For each conclusion equation X ⊴ b(Y1, ..., Yn),

CO*:   [Yi ↾ D_Yi = τi], i = 1, ..., n   →   [X ↾ D_X ⊴ b(τ1, ..., τn)]

- Initialisation: let C be the multiset of infons of the form [Y ↾ D_Y = z] for each terminal premiss equation Y = z and [X ↾ D_X ⊴ 1] for each main variable X that occurs in no premiss equation.

IO*:   start   →   C

Figure 4: Generation rules: propagating order information
- For each side variable Y, let {Xj = ωj ∗ Y}, j = 1, ..., m, be all the premiss equations involving Y,

PS*:   [Y = τ]   →   [Xj = ωj ∗ τ], j = 1, ..., m

- For each conclusion equation X ⊴ b(Y1, ..., Yn),

CS*:   [X = α] [Yi ↾ D_Yi = τi], i = 1, ..., n   →   [Y1 = τ1'] ... [Yn = τn']

with one output state for each τ1', ..., τn' such that α ⊴ b(τ1', ..., τn') and τi' ↾ D_Yi = τi for i = 1, ..., n.

- Initialisation: at the root node X,

IS*:   [X ↾ D_X ⊴ α]   →   [X = α']

with one output state for each α' such that α' ↾ D_X ⊴ α.

Figure 5: Generation rules: propagating support set information
- Termination and Resolution are obvious: the propagations are always performed in a uniform direction in the abstract proof tree, which is finite, so they must terminate. Furthermore, at the end, there is an infon [Z = ...] for each variable Z, so the state is in solved form.

- Preservation: let's show it here for rule PO*; the other rules are treated in the same way.

  - Let C be an output state of rule PO* and σ a solution for that state. Since σ satisfies the infon [Y ↾ D_Y = τ] and ωj° ∗ τ ⊴ βj, we have ωj° ∗ (σ(Y) ↾ D_Y) ⊴ βj, and since σ also satisfies the equation Xj = ωj ∗ Y, we have σ(Xj) ↾ D_Xj ⊴ βj. Hence σ is a solution of the input state of the rule.

  - Let σ be a solution for the input state of rule PO*, and let τ = σ(Y) ↾ D_Y. Then by construction σ is a solution for the output state of the rule corresponding to that choice of τ. We just have to show that this choice is admissible, i.e. that Y° ⊆ τ, which is a direct consequence of Lemma 4, and that ωj° ∗ τ ⊴ βj for all j = 1, ..., m, which is also obvious. Indeed, σ satisfies the equation Xj = ωj ∗ Y, hence σ(Xj) ↾ D_Xj = ωj° ∗ (σ(Y) ↾ D_Y) = ωj° ∗ τ. But σ also satisfies the infon [Xj ↾ D_Xj ⊴ βj], hence ωj° ∗ τ ⊴ βj.

- Generation produces no inconsistent state: it is sufficient to show that consistency is preserved by each rewrite rule. Let's show it here for rule PO*; the other rules are treated in the same way. Let σ be a solution for the input state C of rule PO*, and let τ be such that Y° ⊆ τ and ωj° ∗ τ ⊴ βj for all j = 1, ..., m. Now, let σ' be defined for any side variable Y' as σ'(Y') = σ(Y') if [Y' ↾ D_Y' = τ'] ∈ C, σ'(Y') = τ for Y' = Y, and σ'(Y') = Y'° otherwise (the values for the main variables are computed accordingly, using the equations X' = ω' ∗ Y'). Now, σ' is precisely a solution for the output state of the rule which corresponds to the choice of τ. Indeed, the infons and equations which do not involve the Xj nor Y are obviously satisfied by σ', either because they are satisfied by σ or because of Theorem 4. The infons and equations that involve these variables are also satisfied by σ', because of the conditions on the choice of τ.
5 Conclusion
In this paper, we have investigated the constraint based approach to proof construction in the framework of non-commutative logic. It has been shown that the method introduced for the commutative case in [3] extends quite straightforwardly to the cyclic case, but not to the whole of non-commutative logic. An alternative method, based on the explicit propagation of cyclic triples, has been introduced. This work shows that the mechanism that operates in the propagation relies in no way on the specificity of the structures of order and order varieties. The characteristic properties of these structures only impose extra rewrite rules (OVi for i = 1, 2, 3), so that the generated information does constitute an instance of the structure (note that the choice of these extra rules is the most obvious one, since it mimics exactly the definition of order varieties; it may be, however, that they are not all needed). Other structures, like simple ternary cyclic relations, could also be considered.

References
[1] V. M. Abrusci and P. Ruet. Non-commutative logic I: the multiplicative fragment. Annals of Pure and Applied Logic, 101(1):29-64, 2000.
[2] J.-M. Andreoli. Logic programming with focusing proofs in linear logic. Journal of Logic and Computation, 2(3), 1992.
[3] J.-M. Andreoli. Focussing and proof construction. Annals of Pure and Applied Logic, 107(1-3):131-164, 2001.
[4] J.-M. Andreoli, S. Freeman, and R. Pareschi. The coordination language facility: coordination of distributed objects. Theory and Practice of Object Systems, 2(2):77-94, 1996.
[5] C. Faggian. Proof construction and non-commutativity: a cluster calculus. In Proc. Int. Conf. on Principles and Practice of Declarative Programming. ACM Press, 2000.
[6] J.-Y. Girard. Locus solum. Preprint, Institut de Mathématiques de Luminy, 2000.
[7] A. Joyal. Une théorie combinatoire des séries formelles. Advances in Mathematics, 42:1-82, 1981.
[8] R. Maieli and P. Ruet. Non-commutative logic III: focusing proofs. Technical Report 2000-14, Institut de Mathématiques de Luminy, 2000. Available as http://iml.univ-mrs.fr/ruet/PAPIERS/iml-0400.ps.gz.
[9] D. Miller. A multiple-conclusion meta-logic. Theoretical Computer Science, 165:201-232, 1996.
[10] D. Miller, G. Nadathur, F. Pfenning, and A. Scedrov. Uniform proofs as a foundation for logic programming. Annals of Pure and Applied Logic, 51:125-157, 1991.
[11] R. Möhring. Computationally tractable classes of ordered sets. ASI Series 222. NATO, 1989.
[12] J. Polakow and F. Pfenning. Natural deduction for intuitionistic non-commutative linear logic. In Proc. TLCA'99, Springer LNCS 1581, 1999.
[13] J. Polakow and F. Pfenning. Ordered linear logic programming. Preprint CMU-CS-98-183, 1999.
[14] P. Ruet. Non-commutative logic II: sequent calculus and phase semantics. Mathematical Structures in Computer Science, 10(2):277-312, 2000.
[15] D. N. Yetter. Quantales and (non-commutative) linear logic. Journal of Symbolic Logic, 55(1), 1990.
Identity (A⊥ is a positive atom):
  ⊢ Γ : A | A⊥

Positive rules (ω, ψ are orders on positive formulas and atoms):
  ⊢ Γ : ω | A    ⊢ Γ : ψ | B   ⟹   ⊢ Γ : (ω ∥ ψ) | A ⊗ B        (⊗)
  ⊢ Γ : ω | A    ⊢ Γ : ψ | B   ⟹   ⊢ Γ : (ω < ψ) | A ⊙ B        (⊙)
  ⊢ Γ : | 1                                                      (1)
  ⊢ Γ : ω | A   ⟹   ⊢ Γ : ω | A ⊕ B        ⊢ Γ : ω | B   ⟹   ⊢ Γ : ω | A ⊕ B        (⊕; no rule for 0)
  ⊢ Γ : A   ⟹   ⊢ Γ : !A                                         (!)

Negative rules:
  ⊢ Γ : ω ∗ (A ∥ B)   ⟹   ⊢ Γ : ω ∗ (A ℘ B)        (℘)
  ⊢ Γ : ω ∗ (A < B)   ⟹   ⊢ Γ : ω ∗ (A ▽ B)        (▽)
  ⊢ Γ : ω̄   ⟹   ⊢ Γ : ω ∗ ⊥                         (⊥)
  ⊢ Γ, A : ω̄   ⟹   ⊢ Γ : ω ∗ ?A                     (?)
  ⊢ Γ : ω ∗ A    ⊢ Γ : ω ∗ B   ⟹   ⊢ Γ : ω ∗ (A & B)        (&)
  ⊢ Γ : ω ∗ ⊤                                         (⊤)

Structural rules:
  ⊢ Γ, A : ω ∗ A   ⟹   ⊢ Γ, A : ω̄        (absorption)
  ⊢ Γ : α   ⟹   ⊢ Γ : β  where β ⊴ α     (entropy)

Focussing rule (F positive and |ω| contains no compound negative formulas):
  ⊢ Γ : ω | F   ⟹   ⊢ Γ : ω ∗ F

Unfocussing rule (F is not positive):
  ⊢ Γ : ω ∗ F   ⟹   ⊢ Γ : ω | F

Table 1: NL sequent calculus

A Focussing sequent calculus for NL
We recall here the Focussing sequent calculus of non-commutative logic. A focussing sequent is of one of the two following forms:
- ⊢ Γ : α, where Γ is a set of formulas (implicitly prefixed with the why-not exponential), and α is a series-parallel order variety of occurrences of formulas;
- ⊢ Γ : ω | F, where Γ is a set of formulas (implicitly prefixed with the why-not exponential), ω is a series-parallel order on occurrences of positive formulas and atoms, and F is a single formula called the focus.
Intuitively, the meaning of the bar | corresponds to the gluing of orders, so ω | F stands for ω ∗ F, but the occurrence F has been syntactically distinguished. Since we can focus on any formula of an order variety, the bar marks a fixed (positive) focus and keeps track of its subformulas along the positive steps of the proof. The inferences of the focussing sequent calculus are presented in Table 1. Note that when we build a proof, moving upwards (bottom-up), series-parallelism is preserved.

The Focussing sequent calculus presented above and the Focussing Bipolar sequent calculus introduced in Section 3.2 are isomorphic. This is a straightforward extension of the corresponding result shown for linear logic in [3]. If A is a set of negative atoms, an A-formula (resp. A-sequent, A-proof, etc.) is a formula (resp. sequent, proof, etc.) built only from the atoms of A and their duals.

Theorem 6 Let A ⊆ A' be sets of (negative) atoms and θ a bijective mapping from the A-formulas into A' whose restriction to A itself is the identity. Any A-proof in the Focussing sequent system of non-commutative logic is isomorphic to an A'-proof in the Focussing Bipolar sequent calculus.

The isomorphism is justified by the fact that in a proof of the Focussing sequent calculus, the "critical" sections corresponding to the recursive decomposition of connectives of the same polarity can be grouped together and, by renaming using θ, can be isomorphically replaced by a single inference of the Focussing Bipolar sequent calculus.
[Figure 6: The multiple layers of a formula, and the resulting bipoles. The figure shows a positive formula F, the dashed and dotted lines delimiting its successive positive and negative layers, and the bipoles B, B1, ..., B5 obtained from them.]

This is illustrated below. We informally define a mapping from positive A-formulas into A'-bipoles. Consider for instance the formula F in Figure 6. The successive layers of connectives of identical polarities have been drawn on the figure. Starting from the root, each dashed line shows the border of a positive layer and each dotted line the border of a negative layer (the exponentials have a special status in that the subformula of an exponential ends a layer whatever its polarity; this corresponds to the polarity inverter that is hidden in exponentials). Let G be the formula obtained by replacing in F each subformula located on the first dotted line by its image by θ, which is a negative atom in A'. In the example, there are three such subformulas F1, F2, F3 with topmost connectives indexed 1, 2 and 3 (the indices are only mentioned for reference purposes). It is easy to see that, by construction, the formula F̂ = (θF)⊥ ⊗ G is an A'-bipole. The set of bipoles F̂ for any positive formula F is called the universal program for θ. Now the bipolarisation of any A-proof in the Focussing sequent calculus is a proof in the Focussing Bipolar sequent calculus, involving only A'-atoms and bipoles from the universal program for θ.
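The layer-cutting step described above can be sketched as follows: walking down from the positive root, we stay inside the would-be bipole as long as at most one polarity change has been crossed, and we record the compound positive subformulas at which a second change would occur; these are the subformulas that get renamed to fresh negative atoms by θ. The encoding and names below are ours, and exponentials (which end a layer in the construction above) are not handled in this simplified sketch.

```python
# A formula is either an atom (a string; a trailing "^" marks a positive atom)
# or a pair (connective, [subformulas]).  Exponentials are not handled here.
POSITIVE = {"1", "0", "tensor", "next", "plus", "exists"}

def polarity(f):
    if isinstance(f, str):
        return "+" if f.endswith("^") else "-"
    return "+" if f[0] in POSITIVE else "-"

def cut_points(f, layer="+"):
    """Subformulas on the 'first dotted line': the points where, walking down from
    the positive root through at most one polarity change, a second change back to
    a compound positive subformula would occur."""
    if isinstance(f, str):
        return []
    cuts = []
    for g in f[1]:
        if layer == "+" and polarity(g) == "-":
            cuts.extend(cut_points(g, "-"))            # entering the negative layer
        elif layer == "-" and polarity(g) == "+" and not isinstance(g, str):
            cuts.append(g)                              # second polarity change: cut and rename
        else:
            cuts.extend(cut_points(g, layer))
    return cuts

# q^ tensor (a par (c^ tensor d^)): the inner tensor lies on the first dotted line,
# so it would be renamed to a fresh negative atom when forming the bipole.
print(cut_points(("tensor", ["q^", ("par", ["a", ("tensor", ["c^", "d^"])])])))
```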
Example 3 Here is an example of the mapping from a proof in the Focussing sequent calculus of non-commutative logic into a proof in the Focussing Bipolar sequent calculus. The source proof decomposes the formulas a ⊗ (d & e) and c⊥ ⊙ ((d⊥ ⊗ e⊥) ℘ a⊥), starting from the axioms ⊢ d, d⊥, ⊢ e, e⊥ and ⊢ a⊥, a. Its image is a proof in the Focussing Bipolar sequent calculus whose sequents involve only the atoms a, c, d, e together with the fresh atoms 1, 2, 3, 4, and whose inferences are instances of the bipoles

B1 = 1⊥ ⊗ (a ⊗ (d & e))
B2 = 2⊥ ⊗ (c⊥ ⊙ (3 ℘ 4))
B3 = 3⊥ ⊗ a⊥
B4 = 4⊥ ⊗ (d⊥ ⊗ e⊥)

where 1, 2, 3, 4 are the fresh negative atoms associated by θ with the positive subformulas a ⊗ (d & e), c⊥ ⊙ ((d⊥ ⊗ e⊥) ℘ a⊥), a⊥ and d⊥ ⊗ e⊥ respectively.
A straightforward, practical consequence of Theorem 6 is that proof construction in non-commutative logic is equivalent to proof construction in the focussing bipolar sequent calculus.