A permutation-free sequent calculus for intuitionistic logic

Roy Dyckhoff* and Luis Pinto†
School of Mathematical and Computational Sciences, St Andrews University, St Andrews, Scotland
{rd, luis}@dcs.st-and.ac.uk

Abstract. We describe a sequent calculus MJ, based on work of Herbelin, of which the cut-free derivations are in 1-1 correspondence with the normal natural deduction proofs of intuitionistic logic. MJ (without cut) has the sub-formula property and is therefore convenient for automated proof search; it admits no permutations and therefore avoids some of the backtracking problems in LJ. We present a simple new proof of Herbelin's strong cut elimination theorem for this calculus.
1. Introduction

For proof search in intuitionistic logic, the cut-free sequent calculus LJ of Gentzen and its variants (e.g. G3 in [25]) have the advantage over natural deduction systems of being syntax-directed, but the well-known disadvantages of contractions and of permutations [24]. For deciding the provability of a formula, other variants of LJ (such as the first author's LJT [9]) allow, in the zero-order case, avoidance of contraction; also, the discovery of one proof of a formula allows one to abandon the search for others, thus reducing (when successful) the unwanted effects of permutations. But for proof search, i.e. an attempt to enumerate the set of proofs (in the natural deduction sense), one must, on finding a proof, backtrack to find alternative proofs while avoiding the rediscovery of the same proof. The calculus LJ and the above-mentioned variants are, because of the permutations, unsuitable for this purpose. Herbelin [20, 21] has described a suitable calculus (called there LJT, a name we avoid in case of confusion with that in [9]), but without discussing its suitability for proof search. Our purpose here is to describe and justify a modified version of this calculus. We call it MJ because it is intermediate between LJ and NJ. First we describe the cut-free fragment; then we show that the extension with appropriate cut rules and cut reduction rules admits a strong cut elimination theorem (as already proved by Herbelin). The novel contribution of this report lies in a simple proof of this theorem, using Dershowitz' theorem [8] on recursive path orderings. In [12] we show how MJ can be used as a basis for further analysis of the permutations of LJ. Implementation of MJ is routine and gives an attractive method of searching for proofs, because its derivations are in 1-1 correspondence with the dp-normal natural deduction proofs of NJ. Since cut-free LJ allows permutations and so suffers from the defect that several (but interpermutable [12, 29]) derivations may determine one normal natural deduction, we describe MJ as a permutation-free calculus. The implementation issues will be discussed elsewhere [10].
* Both authors are supported by the European Commission via the ESPRIT Basic Research Action 7232 "GENTZEN"; the second author is supported by the JNICT (Portugal).
† New address: Departamento de Matemática, Universidade do Minho, Braga, Portugal.
2. Herbelin's calculus MJ in the implicational case

Consider first the standard description of normal terms of the untyped lambda calculus:

    A ::= ap(A, N) | var(V)
    N ::= λV.N | an(A)

where V is the set of variables, N is the set of normal terms and A is the set of normal non-abstraction terms. Variable binding conventions are, as usual, that in λV.N, λV binds free occurrences of V in N. We use explicit constructors var and an to ensure consistency with our type-checked implementations. Later, we allow constructors for the connectives other than implication. The head variable of such a term is (for a large term) buried deep inside: Herbelin's representation brings it to the surface. So, following [20, 21], we make the following:

Definition 1. The set M of untyped deduction terms and the set Ms of "lists" of such terms are defined simultaneously as follows:

    M  ::= (V; Ms) | λV.M
    Ms ::= [] | M :: Ms

We use again the same symbol λ where we should really use another symbol. The notation [M] abbreviates the term M::[], [M1, M2] abbreviates M1::[M2], and so on. When we add other connectives, Ms will no longer be a list. Terms are equal iff they are alpha-convertible. Variable binding conventions are as before. Closed terms are those with no free variable occurrences.

Adding type restrictions in the usual way gives us a description of the typable terms in contexts Γ, where contexts associate types to variables. We use the judgment form Γ ⇒ M : P (read as "M is a term of type P in the context Γ") and the judgment form Γ P→ Ms : Q, in which the stoup formula P labels the arrow (read as "Ms is a term-list based on P of type Q in the context Γ"). The idea here is that Ms is a list of terms representing minor premisses of implication elimination rule instances on the (introduction-free) branch from the head formula P to Q; and the assumptions on which these minor premisses depend are all in Γ. The schematic rules for deriving judgments of this kind are (in the implicational fragment) as follows; the Abstract rule has an implicit side-condition about "newness" of the variable.
    Γ, x:P P→ Ms : R
    ------------------------- Select
    Γ, x:P ⇒ (x; Ms) : R

    Γ, x:P ⇒ M : Q
    ------------------------- Abstract
    Γ ⇒ λx.M : P ⊃ Q

    ------------------ Meet
    Γ P→ [] : P

    Γ ⇒ M : P      Γ Q→ Ms : R
    ------------------------------ Split
    Γ P⊃Q→ (M :: Ms) : R
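As a concrete illustration, the two untyped term grammars of this section can be transcribed as mutually recursive datatypes. The following Haskell sketch is ours (the paper fixes no implementation language), and all constructor names (Var, An, Ap, LamN, Sel, LamM, Nil, Cons) are invented for the purpose.

```haskell
-- Untyped normal lambda terms (N over A) and Herbelin-style deduction terms
-- (M over Ms), implicational fragment only.  Constructor names are ours.
type V = String                          -- term variables

data N  = An A  | LamN V N  deriving Show   --  an(A)   |  \x.N
data A  = Var V | Ap A N    deriving Show   --  var(x)  |  ap(A, N)

data M  = Sel V Ms | LamM V M  deriving Show   --  (x; Ms) |  \x.M
data Ms = Nil | Cons M Ms      deriving Show   --  []      |  M :: Ms
```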
There is a bijective translation between M and N, mentioned but not detailed in [20, 21]: briefly, (x; [M1, ..., Mn]) translates into the normal term ap(... ap(var(x), M1) ..., Mn), usually written as x M1 ... Mn, and abstraction terms translate in the obvious way. Formally, θ : M → N and ψ : N → M may be defined as follows:
θ : M → N
    θ(x; Ms)  =def θ′(var(x), Ms)
    θ(λx.M)   =def λx.(θM)

θ′ : A × Ms → N
    θ′(A, [])     =def an(A)
    θ′(A, M::Ms)  =def θ′(ap(A, θM), Ms)

ψ : N → M
    ψ(an(A))  =def ψ′(A, [])
    ψ(λx.N)   =def λx.ψN

ψ′ : A × Ms → M
    ψ′(var(x), Ms)    =def (x; Ms)
    ψ′(ap(A, N), Ms)  =def ψ′(A, ψN :: Ms)
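Transcribed into code, θ and ψ are a pair of mutually inverse folds. The following Haskell sketch is one possible rendering, using the same invented constructor names as in the sketch above (repeated here so that the fragment stands alone).

```haskell
-- theta : M -> N (with theta') and psi : N -> M (with psi'), as just defined.
type V = String
data N  = An A  | LamN V N     deriving Show
data A  = Var V | Ap A N       deriving Show
data M  = Sel V Ms | LamM V M  deriving Show
data Ms = Nil | Cons M Ms      deriving Show

theta :: M -> N
theta (Sel x ms) = theta' (Var x) ms
theta (LamM x m) = LamN x (theta m)

theta' :: A -> Ms -> N
theta' a Nil         = An a
theta' a (Cons m ms) = theta' (Ap a (theta m)) ms

psi :: N -> M
psi (An a)     = psi' a Nil
psi (LamN x n) = LamM x (psi n)

psi' :: A -> Ms -> M
psi' (Var x)  ms = Sel x ms
psi' (Ap a n) ms = psi' a (Cons (psi n) ms)
```

For instance, psi (An (Ap (Ap (Var "x") (An (Var "y"))) (An (Var "z")))) evaluates to Sel "x" (Cons (Sel "y" Nil) (Cons (Sel "z" Nil) Nil)): the application term x y z becomes (x; [(y;[]), (z;[])]), with its head variable brought to the surface.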
Proposition 1.
(i) ψ ∘ θ = idM : M → M.
(ii) ψ ∘ θ′ = ψ′ : A × Ms → M.
Proof. By simultaneous structural induction on the argument and the second argument respectively. See section 3. QED.

Proposition 2.
(i) θ ∘ ψ = idN : N → N.
(ii) θ ∘ ψ′ = θ′ : A × Ms → N.
Proof. By simultaneous structural induction on the argument and the first argument respectively. See section 3. QED.
It follows that M and N are in 1-1 correspondence. We must however check that the correspondences θ and ψ work well at the typed level. Here is an appropriate proof system for the typed version NJ of N :
    ---------------------- ax
    x:P, Γ > var(x) : P

    Γ > A : P
    ------------------ min
    Γ >> an(A) : P

    Γ > A : P ⊃ Q      Γ >> N : P
    -------------------------------- ⊃E
    Γ > ap(A, N) : Q

    x:P, Γ >> N : Q
    ----------------------- ⊃I
    Γ >> λx.N : P ⊃ Q
A proof of a formula P is given by a closed term N so that [] >> N : P is derivable in this calculus. The schematic rules of the typed version MJ of M were given above. A proof (in MJ) of a formula P is given by a closed term M so that [] ⇒ M : P is derivable in MJ.

Proposition 3. The following rules are admissible:

    Γ ⇒ M : R
    ---------------- (i)
    Γ >> θM : R

    Γ > A : P      Γ P→ Ms : R
    ------------------------------ (ii)
    Γ >> θ′(A, Ms) : R

Proof. By simultaneous structural induction on M and Ms. See section 4. QED.
Proposition 4. The following rules are admissible:

    Γ >> N : R
    ---------------- (i)
    Γ ⇒ ψN : R

    Γ > A : P      Γ P→ Ms : R
    ------------------------------ (ii)
    Γ ⇒ ψ′(A, Ms) : R

Proof. By simultaneous structural induction on N and A. See section 4. QED.
Via the Curry-Howard correspondence, we then have a representation of the normal natural deductions of intuitionistic implicational logic. Of course, there are well-known problems [26] of ensuring that this correspondence is 1–1; so we use assumption classes in natural deductions, and then the correspondence is (for closed terms and assumption-free proofs) 1–1 modulo alpha-convertibility (more generally, it is 1–1 modulo choice of variable names). The next two sections give full details, including the extensions to the other intuitionistic connectives. Note that our syntax for normal terms ensures that the permutation-reductions of Prawitz [32, 33] are not applicable: i.e. no instance of disjunction elimination occurs at the root of the major premiss of an elimination rule, and similarly for existential elimination and absurdity elimination. Normality has to be carefully defined; in the terminology of [37], normal deductions are “dp-normal”, i.e. they admit no detour- or absurdity-reductions or permutation-reductions but they may allow some immediate simplifications.
3. The correspondence between MJ and NJ (untyped case)

Given variable sets U (for individuals) and V (for proofs), with individual terms T built as usual from variables in U and function symbols, the special lambda terms M and the normal lambda terms N are defined as follows:

    M  ::= (V; Ms) | λV.M | inl(M) | inr(M) | pair(M, M) | λU.M | pairq(T, M)
    Ms ::= [] | M :: Ms | when(V.M, V.M) | p(Ms) | q(Ms) | ae | apq(T, Ms) | spl(U.V.M)
    N  ::= an(A) | λV.N | i(N) | j(N) | wn(A, V.N, V.N) | pr(N, N) | efq(A) | λU.N | prq(T, N) | ee(A, U.V.N)
    A  ::= var(V) | ap(A, N) | fst(A) | snd(A) | apn(A, T)
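For reference, this full grammar can again be transcribed as datatypes. In the Haskell sketch below all constructor names are ours (chosen to echo the term constructors above), and the individual terms T are given an assumed shape, since the paper leaves the signature of function symbols open.

```haskell
-- Full untyped syntax of section 3 (our constructor names throughout).
type U = String                            -- individual variables
type V = String                            -- proof variables
data T = UVar U | Fun String [T]           -- first-order terms (assumed shape)
  deriving Show

data M  = Sel V Ms | LamM V M | Inl M | Inr M | Pair M M
        | LamU U M | PairQ T M
  deriving Show

data Ms = Nil | Cons M Ms | When V M V M   -- when(x.M, y.M')
        | P Ms | Q Ms                      -- p(Ms), q(Ms): select a conjunct
        | Ae | ApQ T Ms | Spl U V M        -- ae, apq(t, Ms), spl(u.x.M)
  deriving Show

data N  = An A | LamN V N | I N | J N | Wn A V N V N
        | Pr N N | Efq A | LamUN U N | PrQ T N | Ee A U V N
  deriving Show

data A  = Var V | Ap A N | Fst A | Snd A | ApN A T
  deriving Show
```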
This definition of “normal lambda terms” is designed, as noted above, to capture exactly the “dp-normal terms” of [37]. For example, the choice of efq as a constructor of N terms rather than of A terms ensures that no conclusion of an absurdity-elimination rule can be the major premiss of an elimination, since the first argument of each constructor ( wn, efq, ee, ap, fst, snd, apn) for an elimination step is always an A term. Similarly for the permutation reductions and the detour reductions. Conversely, any dp-normal term can be captured by the above grammar for N , by insertion of occurrences of the an and var constructors and routine changes of terminology. In one direction, we define θ : M → N as follows:
θ : M → N
    θ(x; Ms)        =def θ′(var(x), Ms)
    θ(λx.M)         =def λx.(θM)
    θ(inl(M))       =def i(θM)
    θ(inr(M))       =def j(θM)
    θ(pair(M, M′))  =def pr(θM, θM′)
    θ(λu.M)         =def λu.(θM)
    θ(pairq(T, M))  =def prq(T, θM)

θ′ : A × Ms → N
    θ′(A, [])               =def an(A)
    θ′(A, M::Ms)            =def θ′(ap(A, θM), Ms)
    θ′(A, when(x.M, y.M′))  =def wn(A, x.θM, y.θM′)
    θ′(A, p(Ms))            =def θ′(fst(A), Ms)
    θ′(A, q(Ms))            =def θ′(snd(A), Ms)
    θ′(A, ae)               =def efq(A)
    θ′(A, apq(T, Ms))       =def θ′(apn(A, T), Ms)
    θ′(A, spl(u.x.M))       =def ee(A, u.x.θM)
and, in the other direction, its inverse ψ : N → M as follows:
ψ : N → M
    ψ(an(A))             =def ψ′(A, [])
    ψ(λx.N)              =def λx.ψN
    ψ(i(N))              =def inl(ψN)
    ψ(j(N))              =def inr(ψN)
    ψ(wn(A, x.N, y.N′))  =def ψ′(A, when(x.ψN, y.ψN′))
    ψ(efq(A))            =def ψ′(A, ae)
    ψ(pr(N, N′))         =def pair(ψN, ψN′)
    ψ(λu.N)              =def λu.ψN
    ψ(prq(T, N))         =def pairq(T, ψN)
    ψ(ee(A, u.x.N))      =def ψ′(A, spl(u.x.ψN))

ψ′ : A × Ms → M
    ψ′(var(x), Ms)     =def (x; Ms)
    ψ′(ap(A, N), Ms)   =def ψ′(A, ψN :: Ms)
    ψ′(fst(A), Ms)     =def ψ′(A, p(Ms))
    ψ′(snd(A), Ms)     =def ψ′(A, q(Ms))
    ψ′(apn(A, T), Ms)  =def ψ′(A, apq(T, Ms))

Proposition 1.
(i) For all M : M, ψ(θM) = M.
(ii) For all Ms : Ms and A : A, ψ(θ′(A, Ms)) = ψ′(A, Ms).
Proof. By simultaneous structural induction on M and Ms.
Case M = (x; Ms).
    ψ(θ(x; Ms)) = ψ(θ′(var(x), Ms))    by definition of θ (rule 1)
                = ψ′(var(x), Ms)       by (ii) (induction)
                = (x; Ms)              by definition of ψ′ (rule 1)

Case M = λx.M′.
    ψ(θ(λx.M′)) = ψ(λx.θM′)            by definition of θ (rule 2)
                = λx.ψ(θM′)            by definition of ψ (rule 2)
                = λx.M′                by (i) (induction)

Case M = λu.M′: similar.

Case M = inl(M′).
    ψ(θ(inl(M′))) = ψ(i(θM′))          by definition of θ (rule 3)
                  = inl(ψ(θM′))        by definition of ψ (rule 3)
                  = inl(M′)            by (i) (induction)

Cases M = inr(M′), M = pair(M′, M′′) and M = pairq(T, M′): similar.

Case Ms = [].
    ψ(θ′(A, [])) = ψ(an(A))            by definition of θ′ (rule 1)
                 = ψ′(A, [])           by definition of ψ (rule 1)

Case Ms = M::Mss.
    ψ(θ′(A, M::Mss)) = ψ(θ′(ap(A, θM), Mss))    by definition of θ′ (rule 2)
                     = ψ′(ap(A, θM), Mss)       by (ii) (induction)
                     = ψ′(A, ψ(θM)::Mss)        by definition of ψ′ (rule 2)
                     = ψ′(A, M::Mss)            by (i) (induction)

Case Ms = apq(T, Mss): similar.

Case Ms = when(x.M′, y.M′′).
    ψ(θ′(A, when(x.M′, y.M′′))) = ψ(wn(A, x.θM′, y.θM′′))           by definition of θ′ (rule 3)
                                = ψ′(A, when(x.ψ(θM′), y.ψ(θM′′)))  by definition of ψ (rule 5)
                                = ψ′(A, when(x.M′, y.M′′))          by (i) (induction) (twice)

Case Ms = p(Mss).
    ψ(θ′(A, p(Mss))) = ψ(θ′(fst(A), Mss))    by definition of θ′ (rule 4)
                     = ψ′(fst(A), Mss)       by (ii) (induction)
                     = ψ′(A, p(Mss))         by definition of ψ′ (rule 3)

Case Ms = q(Mss): similar.

Case Ms = ae.
    ψ(θ′(A, ae)) = ψ(efq(A))                 by definition of θ′ (rule 6)
                 = ψ′(A, ae)                 by definition of ψ (rule 6)

Case Ms = spl(u.x.M).
    ψ(θ′(A, spl(u.x.M))) = ψ(ee(A, u.x.θM))       by definition of θ′ (rule 7)
                         = ψ′(A, spl(u.x.ψ(θM)))  by definition of ψ (rule 9)
                         = ψ′(A, spl(u.x.M))      by (i) (induction)

QED.
Proposition 2.
(i) For all N : N, θ(ψN) = N.
(ii) For all A : A and Ms : Ms, θ(ψ′(A, Ms)) = θ′(A, Ms).
Proof. By simultaneous structural induction on N and A.
Case N = an(A).
    θ(ψ(an(A))) = θ(ψ′(A, []))     by definition of ψ (rule 1)
                = θ′(A, [])        by (ii) (induction)
                = an(A)            by definition of θ′ (rule 1)

Case N = λx.N′.
    θ(ψ(λx.N′)) = θ(λx.ψN′)        by definition of ψ (rule 2)
                = λx.θ(ψN′)        by definition of θ (rule 2)
                = λx.N′            by (i) (induction)

Case N = λu.N′: similar.

Cases N = i(N′), j(N′), pr(N′, N′′) and prq(T, N′): similar.

Case N = wn(A, x.N′, y.N′′).
    θ(ψ(wn(A, x.N′, y.N′′))) = θ(ψ′(A, when(x.ψN′, y.ψN′′)))   by definition of ψ (rule 5)
                             = θ′(A, when(x.ψN′, y.ψN′′))      by (ii) (induction)
                             = wn(A, x.θ(ψN′), y.θ(ψN′′))      by definition of θ′ (rule 3)
                             = wn(A, x.N′, y.N′′)              by (i) (induction), twice

Case N = efq(A).
    θ(ψ(efq(A))) = θ(ψ′(A, ae))    by definition of ψ (rule 6)
                 = θ′(A, ae)       by (ii) (induction)
                 = efq(A)          by definition of θ′ (rule 6)

Case N = ee(A, u.x.N′).
    θ(ψ(ee(A, u.x.N′))) = θ(ψ′(A, spl(u.x.ψN′)))   by definition of ψ (rule 9)
                        = θ′(A, spl(u.x.ψN′))      by (ii) (induction)
                        = ee(A, u.x.θ(ψN′))        by definition of θ′ (rule 7)
                        = ee(A, u.x.N′)            by (i) (induction)

Case A = var(x).
    θ(ψ′(var(x), Ms)) = θ((x; Ms))       by definition of ψ′ (rule 1)
                      = θ′(var(x), Ms)   by definition of θ (rule 1)

Case A = ap(A′, N).
    θ(ψ′(ap(A′, N), Ms)) = θ(ψ′(A′, ψN :: Ms))    by definition of ψ′ (rule 2)
                         = θ′(A′, ψN :: Ms)       by (ii) (induction)
                         = θ′(ap(A′, θ(ψN)), Ms)  by definition of θ′ (rule 2)
                         = θ′(ap(A′, N), Ms)      by (i) (induction)

Case A = apn(A′, T): similar.

Case A = fst(A′).
    θ(ψ′(fst(A′), Ms)) = θ(ψ′(A′, p(Ms)))    by definition of ψ′ (rule 3)
                       = θ′(A′, p(Ms))       by (ii) (induction)
                       = θ′(fst(A′), Ms)     by definition of θ′ (rule 4)

Case A = snd(A′): similar.

QED.
4. The isomorphism (typed case)

We show here the completeness (soundness and adequacy) of the system MJ w.r.t. NJ, i.e. that derivable judgments of the form Γ ⇒ M : P translate to derivable judgments Γ >> θM : P and vice-versa. First, we give the full version of the typed system NJ. The formulae/types P, Q, R, ... are based on the usual first-order atomic formulae, the quantifiers ∀, ∃, the constant ⊥ and the infix operators ⊃, ∧, ∨. There are two judgment forms, Γ >> N : P and Γ > A : P, since we have the two syntactic categories N and A. Contexts Γ are finite functions from variables V to formulae. If ∀P is a formula, where P is an abstraction u.Q, and t is a term, then P(t) is the formula obtained by substituting t for the variable u in Q. The rules are:
    ---------------------- ax
    x:P, Γ > var(x) : P

    Γ > A : P
    ------------------ min
    Γ >> an(A) : P

    x:P, Γ >> N : Q
    ----------------------- ⊃I
    Γ >> λx.N : P ⊃ Q

    Γ > A : P ⊃ Q      Γ >> N : P
    -------------------------------- ⊃E
    Γ > ap(A, N) : Q

    Γ >> N : P
    ----------------------- ∨I1
    Γ >> i(N) : P ∨ Q

    Γ >> N : Q
    ----------------------- ∨I2
    Γ >> j(N) : P ∨ Q

    Γ > A : P ∨ Q      x:P, Γ >> N : R      y:Q, Γ >> N′ : R
    ------------------------------------------------------------ ∨E
    Γ >> wn(A, x.N, y.N′) : R

    Γ >> N : P      Γ >> N′ : Q
    ------------------------------ ∧I
    Γ >> pr(N, N′) : P ∧ Q

    Γ > A : P ∧ Q
    ------------------ ∧E1
    Γ > fst(A) : P

    Γ > A : P ∧ Q
    ------------------ ∧E2
    Γ > snd(A) : Q

    Γ > A : ⊥
    ------------------ ⊥E
    Γ >> efq(A) : P

    Γ >> N : P(u)
    ------------------ ∀I
    Γ >> λu.N : ∀P

    Γ > A : ∀P
    ----------------------- ∀E
    Γ > apn(A, t) : P(t)

    Γ >> N : P(t)
    ----------------------- ∃I
    Γ >> prq(t, N) : ∃P

    Γ > A : ∃P      x:P(u), Γ >> N : R
    -------------------------------------- ∃E
    Γ >> ee(A, u.x.N) : R

A proof of a formula P is given by a closed term N so that [] >> N : P is derivable in this calculus.
Second, here are the rules of the typed system MJ, again with two judgment forms Γ ⇒ M : P and Γ P→ Ms : Q.

    Γ, x:P P→ Ms : R
    ------------------------- Select
    Γ, x:P ⇒ (x; Ms) : R

    ------------------ Meet
    Γ P→ [] : P

    Γ, x:P ⇒ M : Q
    ------------------------ Abstract
    Γ ⇒ λx.M : P ⊃ Q

    Γ ⇒ M : P      Γ Q→ Ms : R
    ------------------------------ Split
    Γ P⊃Q→ (M :: Ms) : R

    Γ ⇒ M : P
    ------------------------ ∨R1
    Γ ⇒ inl(M) : P ∨ Q

    Γ ⇒ M : Q
    ------------------------ ∨R2
    Γ ⇒ inr(M) : P ∨ Q

    Γ, x:P ⇒ M : R      Γ, y:Q ⇒ M′ : R
    ---------------------------------------- ∨S
    Γ P∨Q→ when(x.M, y.M′) : R

    Γ ⇒ M : P      Γ ⇒ M′ : Q
    ------------------------------ ∧R
    Γ ⇒ pair(M, M′) : P ∧ Q

    Γ P→ Ms : R
    ---------------------- ∧S1
    Γ P∧Q→ p(Ms) : R

    Γ Q→ Ms : R
    ---------------------- ∧S2
    Γ P∧Q→ q(Ms) : R

    ------------------ ⊥S
    Γ ⊥→ ae : R

    Γ ⇒ M : P(u)
    ------------------ ∀R
    Γ ⇒ λu.M : ∀P

    Γ P(t)→ Ms : R
    ----------------------------- ∀S
    Γ ∀P→ apq(t, Ms) : R

    Γ ⇒ M : P(t)
    ------------------------ ∃R
    Γ ⇒ pairq(t, M) : ∃P

    Γ, x:P(u) ⇒ M : R
    ------------------------------ ∃S
    Γ ∃P→ spl(u.x.M) : R

A proof in MJ of a formula P is given by a closed term M such that [] ⇒ M : P is derivable in this calculus.
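Since the cut-free rules are syntax-directed, a naive proof-search procedure can be read off from them almost literally. The following depth-bounded Haskell sketch (implicational fragment only; all names, and the depth bound itself, are our own devices and not the implementation reported in [10]) enumerates the terms M with Γ ⇒ M : P, trying Abstract when the goal is an implication and Select with Meet/Split for every context formula.

```haskell
-- A small, depth-bounded enumerator of cut-free MJ proofs, implicational
-- fragment only.  Purely illustrative; all names are invented.
data Formula = Atom String | Formula :-> Formula  deriving (Eq, Show)
infixr 5 :->

data M  = Sel String Ms | Lam String M  deriving Show   -- (x; Ms) | \x.M
data Ms = Nil | Cons M Ms               deriving Show   -- []      | M :: Ms

type Ctx = [(String, Formula)]

-- Judgment  Gamma => M : goal.
prove :: Int -> Ctx -> Formula -> [M]
prove 0 _   _    = []
prove d ctx goal =
     [ Lam x m  | p :-> q <- [goal]                       -- Abstract
                , let x = fresh ctx
                , m <- prove (d - 1) ((x, p) : ctx) q ]
  ++ [ Sel x ms | (x, p) <- ctx                           -- Select
                , ms <- proveStoup (d - 1) ctx p goal ]

-- Judgment  Gamma  P-> Ms : goal  (P is the stoup formula).
proveStoup :: Int -> Ctx -> Formula -> Formula -> [Ms]
proveStoup d ctx p goal =
     [ Nil | p == goal ]                                  -- Meet
  ++ [ Cons m ms | a :-> b <- [p]                         -- Split
                 , m  <- prove d ctx a
                 , ms <- proveStoup d ctx b goal ]

fresh :: Ctx -> String
fresh ctx = head [ v | n <- [0 :: Int ..]
                     , let v = "x" ++ show n
                     , v `notElem` map fst ctx ]
```

For example, prove 4 [] ((Atom "a" :-> Atom "b") :-> (Atom "a" :-> Atom "b")) returns exactly two terms, corresponding to the proofs λx0.(x0; []) and λx0.λx1.(x0; [(x1; [])]); no two enumerated terms determine the same normal natural deduction, which is the point of the permutation-freeness claim.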
Proposition 3 (Soundness Theorem). The following rules are admissible:

    Γ ⇒ M : R
    ---------------- (i)
    Γ >> θM : R

    Γ > A : P      Γ P→ Ms : R
    ------------------------------ (ii)
    Γ >> θ′(A, Ms) : R

Proof. By simultaneous induction on the structures of M and Ms respectively.
Case M = (x; Ms).
Then the premiss is the conclusion of a Select rule, so for some P, x:P is in Γ and Γ P→ Ms : R. By the ax rule of NJ, Γ > var(x) : P. By (ii) (induction), Γ >> θ′(var(x), Ms) : R, i.e. Γ >> θ(x; Ms) : R.
The 6 remaining cases for M are easy. Case Ms = [].
The second premiss must be the conclusion of a Meet rule with P = R. Then θ ′(A, Ms) = an(A), so the first premiss of (ii) justifies the conclusion, using the min rule of NJ.
Case Ms = (M:: Mss).
The second premiss must be the conclusion of a Split rule, with P of the form P′ ⊃ Q′, with premisses Γ ⇒ M : P′ and Γ Q′→ Mss : R. We can now build the proof

    Γ ⇒ M : P′
    ---------------- (i)
    Γ >> θM : P′

    Γ > A : P′ ⊃ Q′      Γ >> θM : P′
    ------------------------------------- ⊃E
    Γ > ap(A, θM) : Q′

    Γ > ap(A, θM) : Q′      Γ Q′→ Mss : R
    ------------------------------------------ (ii)
    Γ >> θ′(ap(A, θM), Mss) : R

whence, using the definition of θ′, we conclude that Γ >> θ′(A, M::Mss) : R.

Case Ms = when(x.M, y.M′).
The second premiss must be the conclusion of an ∨S rule, with P of the form P′ ∨ Q′, with premisses Γ, x:P′ ⇒ M : R and Γ, y:Q′ ⇒ M′ : R. So the proof is

    Γ, x:P′ ⇒ M : R                      Γ, y:Q′ ⇒ M′ : R
    ---------------------- (i)           ----------------------- (i)
    Γ, x:P′ >> θM : R                    Γ, y:Q′ >> θM′ : R

    Γ > A : P′ ∨ Q′      Γ, x:P′ >> θM : R      Γ, y:Q′ >> θM′ : R
    ------------------------------------------------------------------ ∨E
    Γ >> wn(A, x.θM, y.θM′) : R

whence we conclude that Γ >> θ′(A, when(x.M, y.M′)) : R.

Case Ms = p(Mss).
The second premiss must be the conclusion of an ∧S1 rule, with P of the form P′ ∧ Q′, with premiss Γ P′→ Mss : R. So the proof is

    Γ > A : P′ ∧ Q′
    -------------------- ∧E1
    Γ > fst(A) : P′

    Γ > fst(A) : P′      Γ P′→ Mss : R
    --------------------------------------- (ii)
    Γ >> θ′(fst(A), Mss) : R

whence we conclude that Γ >> θ′(A, p(Mss)) : R.

Case Ms = q(Mss). Similar.

Case Ms = ae. Easy.

Case Ms = apq(t, Mss).
The second premiss must be the conclusion of an ∀S rule, with P of the form ∀P′, with premiss Γ P′(t)→ Mss : R and t a term. So the proof is

    Γ > A : ∀P′
    --------------------------- ∀E
    Γ > apn(A, t) : P′(t)

    Γ > apn(A, t) : P′(t)      Γ P′(t)→ Mss : R
    ------------------------------------------------ (ii)
    Γ >> θ′(apn(A, t), Mss) : R

whence we conclude that Γ >> θ′(A, apq(t, Mss)) : R.

Case Ms = spl(u.x.M).
The second premiss must be the conclusion of an ∃S rule, with P of the form ∃P′, with premiss Γ, x:P′(u) ⇒ M : R. So the proof is

    Γ, x:P′(u) ⇒ M : R
    ------------------------- (i)
    Γ, x:P′(u) >> θM : R

    Γ > A : ∃P′      Γ, x:P′(u) >> θM : R
    ------------------------------------------ ∃E
    Γ >> ee(A, u.x.θM) : R

whence we conclude that Γ >> θ′(A, spl(u.x.M)) : R. QED.

Proposition 4 (Adequacy Theorem). The following rules are admissible:

    Γ >> N : R
    ---------------- (i)
    Γ ⇒ ψN : R

    Γ > A : P      Γ P→ Ms : R
    ------------------------------ (ii)
    Γ ⇒ ψ′(A, Ms) : R

Proof. By simultaneous induction on the structures of N and A respectively.
Case N = an(A).
So the premiss is the conclusion of a min rule from Γ > A : R. So the proof is

    --------------- Meet
    Γ R→ [] : R

    Γ > A : R      Γ R→ [] : R
    ------------------------------ (ii)
    Γ ⇒ ψ′(A, []) : R

whence we conclude that Γ ⇒ ψ(an(A)) : R.

Case N = λx.N′.
So the premiss is the conclusion of an ⊃I rule, with R of the form P ⊃ Q, from x:P, Γ >> N′ : Q. So the proof is

    x:P, Γ >> N′ : Q
    ---------------------- (i)
    x:P, Γ ⇒ ψN′ : Q
    --------------------------- Abstract
    Γ ⇒ λx.ψN′ : P ⊃ Q

whence we conclude that Γ ⇒ λx.ψ(N′) : R.

Cases N = i(N′), N = j(N′), N = pr(N′, N′′), N = λu.N′ and N = prq(t, N′). Similar.

Case N = wn(A, x.N′, y.N′′).
By (ii) (induction) and two uses of (i) (induction),

    Γ, x:P >> N′ : R                     Γ, y:Q >> N′′ : R
    ---------------------- (i)           ----------------------- (i)
    Γ, x:P ⇒ ψN′ : R                     Γ, y:Q ⇒ ψN′′ : R

    Γ, x:P ⇒ ψN′ : R      Γ, y:Q ⇒ ψN′′ : R
    --------------------------------------------- ∨S
    Γ P∨Q→ when(x.ψN′, y.ψN′′) : R

    Γ > A : P ∨ Q      Γ P∨Q→ when(x.ψN′, y.ψN′′) : R
    --------------------------------------------------------- (ii)
    Γ ⇒ ψ′(A, when(x.ψN′, y.ψN′′)) : R

But ψ′(A, when(x.ψN′, y.ψN′′)) = ψ(wn(A, x.N′, y.N′′)) = ψN.
Case N = efq(A) .
Easy.
Case N = ee(A, u.x. N ′ ).
By (i) (induction) and (ii) (induction),
    Γ, x:P(u) >> N′ : R
    -------------------------- (i)
    Γ, x:P(u) ⇒ ψN′ : R
    -------------------------------- ∃S
    Γ ∃P→ spl(u.x.ψN′) : R

    Γ > A : ∃P      Γ ∃P→ spl(u.x.ψN′) : R
    --------------------------------------------- (ii)
    Γ ⇒ ψ′(A, spl(u.x.ψN′)) : R

But ψ′(A, spl(u.x.ψN′)) = ψ(ee(A, u.x.N′)) = ψN.

Case A = var(x).
So x:P is in Γ, whence Γ P→ Ms : R yields (by the Select rule) Γ ⇒ (x; Ms) : R. But ψ′(var(x), Ms) = (x; Ms).

Case A = ap(A′, N).
So Γ > A′ : P′ ⊃ P and Γ >> N : P′. So the proof is:

    Γ >> N : P′
    ----------------- (i)
    Γ ⇒ ψN : P′

    Γ ⇒ ψN : P′      Γ P→ Ms : R
    ---------------------------------- Split
    Γ P′⊃P→ (ψN :: Ms) : R

    Γ > A′ : P′ ⊃ P      Γ P′⊃P→ (ψN :: Ms) : R
    ------------------------------------------------- (ii)
    Γ ⇒ ψ′(A′, ψN :: Ms) : R

whence we conclude that Γ ⇒ ψ′(ap(A′, N), Ms) : R.

Case A = fst(A′).
So Γ > A′ : P ∧ Q. So the proof is:

    Γ P→ Ms : R
    ------------------------ ∧S1
    Γ P∧Q→ p(Ms) : R

    Γ > A′ : P ∧ Q      Γ P∧Q→ p(Ms) : R
    ------------------------------------------ (ii)
    Γ ⇒ ψ′(A′, p(Ms)) : R

whence we conclude that Γ ⇒ ψ′(fst(A′), Ms) : R.

Case A = snd(A′) is similar.

Case A = apn(A′, t).
So Γ > A′ : ∀P′ and t is a term with P of the form P′(t). So the proof is:

    Γ P′(t)→ Ms : R
    ------------------------------ ∀S
    Γ ∀P′→ apq(t, Ms) : R

    Γ > A′ : ∀P′      Γ ∀P′→ apq(t, Ms) : R
    --------------------------------------------- (ii)
    Γ ⇒ ψ′(A′, apq(t, Ms)) : R

whence we conclude that Γ ⇒ ψ′(apn(A′, t), Ms) : R.
QED.
5. Admissibility of cut rules

From now on we consider the syntax of MJ extended by allowing constructors for terms representing derivations using a cut rule. Since there are two kinds of sequent, there are several (in fact, four) cut rules. For convenience in proving the admissibility of the cut rules, the constructors cuti have an extra argument, the cut formula. The context-free syntax is given by adding the productions

    M  ::= cut3(P, M, Ms) | cut4(P, M, M)
    Ms ::= cut1(P, Ms, Ms) | cut2(P, M, Ms)

Theorem (Coquand) [20, 21]. The following rules are admissible in cut-free MJ:

    Γ Q→ P      Γ P→ R
    ------------------------ Cut1
    Γ Q→ R

    Γ ⇒ P      Γ, P Q→ R
    ------------------------ Cut2
    Γ Q→ R

    Γ ⇒ P      Γ P→ R
    ------------------------ Cut3
    Γ ⇒ R

    Γ ⇒ P      Γ, P ⇒ R
    ------------------------ Cut4
    Γ ⇒ R
Proof. For completeness (and in view of the importance of the result) we give three proofs: the first is, we surmise, the one mentioned but not detailed in [20, 21] and there attributed to Coquand; the second (in section 6) is essentially that of [20, 21]; the third (in section 7) is new. With proof-term annotations these rules are

    Γ Q→ Ms : P      Γ P→ Mss : R
    ------------------------------------ Cut1
    Γ Q→ cut1(P, Ms, Mss) : R

    Γ ⇒ M : P      Γ, x:P Q→ Ms : R
    ------------------------------------ Cut2
    Γ Q→ cut2(P, M, x.Ms) : R

    Γ ⇒ M : P      Γ P→ Ms : R
    ------------------------------------ Cut3
    Γ ⇒ cut3(P, M, Ms) : R

    Γ ⇒ M : P      Γ, x:P ⇒ M′ : R
    ------------------------------------ Cut4
    Γ ⇒ cut4(P, M, x.M′) : R
Here is the full set of cut-reduction rules [20, 21] (we omit the quantifier rules, which add no extra difficulties). The rules are written as equations. They are referred to later as cut1([]), cut1(::), ..., cut4(≠), ... .

    cut1(P, [], Mss)              = Mss
    cut1(P, M::Ms, Mss)           = M :: cut1(P, Ms, Mss)
    cut1(P, when(x.M, y.M′), Mss) = when(x.cut3(P, M, Mss), y.cut3(P, M′, Mss))
    cut1(P, p(Ms), Mss)           = p(cut1(P, Ms, Mss))
    cut1(P, q(Ms), Mss)           = q(cut1(P, Ms, Mss))
    cut2(P, M, x.[])                = []
    cut2(P, M, x.(M′::Ms))          = cut4(P, M, x.M′) :: cut2(P, M, x.Ms)
    cut2(P, M, x.when(y.M′, z.M′′)) = when(y.cut4(P, M, x.M′), z.cut4(P, M, x.M′′))
    cut2(P, M, x.p(Ms))             = p(cut2(P, M, x.Ms))
    cut2(P, M, x.q(Ms))             = q(cut2(P, M, x.Ms))

    cut3(P, (x; Ms), Mss)                  = (x; cut1(P, Ms, Mss))
    cut3(S ⊃ T, λy.M, [])                  = λy.M
    cut3(S ⊃ T, λy.M, M′::Ms)              = cut3(T, cut4(S, M′, y.M), Ms)
    cut3(S ∨ T, inl(M), [])                = inl(M)
    cut3(S ∨ T, inl(M), when(y.M′, z.M′′)) = cut4(S, M, y.M′)
    cut3(S ∨ T, inr(M), [])                = inr(M)
    cut3(S ∨ T, inr(M), when(y.M′, z.M′′)) = cut4(T, M, z.M′′)
    cut3(S ∧ T, pair(M, M′), [])           = pair(M, M′)
    cut3(S ∧ T, pair(M, M′), p(Ms))        = cut3(S, M, Ms)
    cut3(S ∧ T, pair(M, M′), q(Ms))        = cut3(T, M′, Ms)

    cut4(P, M, x.(y; Ms))          = (y; cut2(P, M, x.Ms))    (y ≠ x)
    cut4(P, M, x.(x; Ms))          = cut3(P, M, cut2(P, M, x.Ms))
    cut4(P, M, x.(λy.M′))          = λy.cut4(P, M, x.M′)
    cut4(P, M, x.inl(M′))          = inl(cut4(P, M, x.M′))
    cut4(P, M, x.inr(M′))          = inr(cut4(P, M, x.M′))
    cut4(P, M, x.pair(M′, M′′))    = pair(cut4(P, M, x.M′), cut4(P, M, x.M′′))

The rules are illustrated (only in the case of implication) in the Appendix.

A simple cut instance is an instance of cut with cut-free premisses. The weight of a simple cut instance is the quadruple (size of cut formula, type of cut, height of right premiss, height of left premiss). Weights are ordered lexicographically, with the types ordered by Cut1 = Cut3 < Cut2 = Cut4. The weak cut reduction strategy is to reduce an arbitrary simple cut instance, recursively reducing any simple cut instances in the premisses of the result. By induction on the weight of simple cut instances, this strategy terminates. More formally, we define the function * from simple cut instances to cut-free derivations as follows (we show only the implicational case):
    cut1(P, [], Mss)*           = Mss
    cut1(P, M::Ms, Mss)*        = M :: cut1(P, Ms, Mss)*
    cut2(P, M, x.[])*           = []
    cut2(P, M, x.(M′::Ms))*     = cut4(P, M, x.M′)* :: cut2(P, M, x.Ms)*
    cut3(P, (x; Ms), Mss)*      = (x; cut1(P, Ms, Mss)*)
    cut3(P, λy.M, [])*          = λy.M
    cut3(S ⊃ T, λy.M, M′::Ms)*  = cut3(T, cut4(S, M′, y.M)*, Ms)*
    cut4(P, M, x.(y; Ms))*      = (y; cut2(P, M, x.Ms)*)    (y ≠ x)
    cut4(P, M, x.(x; Ms))*      = cut3(P, M, cut2(P, M, x.Ms)*)*
    cut4(P, M, x.(λy.M′))*      = λy.(cut4(P, M, x.M′)*)

and it is easily seen that any application of * on the RHS is to a cut instance of lower weight than that on the LHS. Note the crucial use of the ordering on types of cut, used for example in the penultimate equation (which copes, roughly, with the permutation of cut with contraction) to ensure that the cut3 instance on the RHS is of lower weight than the cut4 instance on the LHS. This completes the first proof, that the above quadruple of cut rules is admissible, i.e. that the cut reduction rules are weakly terminating. (It is easily seen that any simple cut instance matches the LHS of at least one rule.) QED.
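The equations for * are directly executable. Here is one possible transcription in Haskell (implicational fragment, our constructor names, the cut formula carried as an explicit argument as in the paper). It is a sketch of the weak reduction strategy, not the authors' code; clauses fall through unchanged when a premiss is not yet cut-free, and variable capture is ignored, relying on the paper's binding conventions.

```haskell
-- Implicational fragment with explicit cut constructors, and the function *
-- (here starM / starMs) eliminating a topmost simple cut.  Names are ours.
data Formula = Atom String | Formula :-> Formula  deriving (Eq, Show)

data M  = Sel String Ms | Lam String M
        | Cut3 Formula M Ms | Cut4 Formula M String M    deriving Show
data Ms = Nil | Cons M Ms
        | Cut1 Formula Ms Ms | Cut2 Formula M String Ms  deriving Show

starMs :: Ms -> Ms
starMs (Cut1 _ Nil          mss) = mss
starMs (Cut1 p (Cons m ms)  mss) = Cons m (starMs (Cut1 p ms mss))
starMs (Cut2 _ _ _ Nil)          = Nil
starMs (Cut2 p m x (Cons m' ms)) = Cons (starM (Cut4 p m x m'))
                                        (starMs (Cut2 p m x ms))
starMs ms                        = ms   -- already cut-free, or premisses not cut-free

starM :: M -> M
starM (Cut3 p (Sel y ms) mss)  = Sel y (starMs (Cut1 p ms mss))
starM (Cut3 _ (Lam y m)  Nil)  = Lam y m
starM (Cut3 (s :-> t) (Lam y m) (Cons m' ms))
                               = starM (Cut3 t (starM (Cut4 s m' y m)) ms)
starM (Cut4 p m x (Sel y ms))
  | y == x                     = starM (Cut3 p m (starMs (Cut2 p m x ms)))
  | otherwise                  = Sel y (starMs (Cut2 p m x ms))
starM (Cut4 p m x (Lam y m'))  = Lam y (starM (Cut4 p m x m'))
starM m                        = m      -- already cut-free, or premisses not cut-free
```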
6. Strong Normalisation

More generally, one may show, following [20, 21], the following:

Strong Normalisation Theorem (Herbelin). The above set of rules is strongly normalising.

Proof. (We give here our own version of Herbelin's proof, with some simplifications but following essentially the same pattern, originally due to Tait.)

Definition. Let k∈N. (From now on, N is the type of natural numbers.) M is SN(k) iff every reduction sequence starting at M has length (the number of reduction steps) ≤ k. M is SN iff M is SN(k) for some k. The least such k is called the SN-height of M. Similar definitions are made for Ms. We shall write M ~~> M' to mean that M rewrites in one step to M' using one of the above rules (which were presented as equations).

Proposition 5.
(i) M is SN(k) iff whenever M ~~> M', M' is SN(k-1).
(ii) Ms is SN(k) iff whenever Ms ~~> Ms', Ms' is SN(k-1).
Proof. Routine. QED.
Proposition 6. When M is SN, the reduction tree of M is finite.
Proof. By König's lemma. QED.

Lemma 1.
(i) [] is SN(k) for every k ≥ 0.
(ii) (M :: Ms) is SN(k) iff M is SN(k) and Ms is SN(k).
(iii) (x; Ms) is SN(k) iff Ms is SN(k).
(iv) λx.M is SN(k) iff M is SN(k).
Proof. Routine. QED.
The next four definitions present, for convenience, various implications as rules: these are rules in our metalanguage rather than the object language, i.e. they are not formal judgments of (an extension of) MJ.

Definition. Let Γ Q→ Ms : P. Then Ms is SN1 iff for all Mss,

    Γ P→ Mss : R      Mss is SN
    ------------------------------
    cut1(P, Ms, Mss) is SN

Definition. Let Γ ⇒ M : P. Then M is SN2 iff for all Ms,

    Γ, x:P Q→ Ms : R      Ms is SN
    ---------------------------------
    cut2(P, M, x.Ms) is SN

Definition. Let Γ ⇒ M : P. Then M is SN3 iff for all Ms,

    Γ P→ Ms : R      Ms is SN
    ------------------------------
    cut3(P, M, Ms) is SN

Definition. Let Γ ⇒ M : P. Then M is SN4 iff for all M',

    Γ, x:P ⇒ M' : R      M' is SN
    ---------------------------------
    cut4(P, M, x.M') is SN

Lemma 2. SNi implies SN, for i = 1, 2, 3 and 4.
Proof. Suppose Ms is SN1. Since [] is SN, so is cut1(P, Ms, []), and so it is, say, SN(k). Then easily Ms is SN(k). Similarly for i = 2, 3 and 4, because also (x; []) is SN. QED.

Notation. When M is SNi and also SN(k), we say that M is SNi(k). Similarly for Ms.
Lemma 3.
(i) Ms is SN1 and Ms ~~> Ms* imply Ms* is SN1.
(ii) M is SNi and M ~~> M* imply M* is SNi (i = 2, 3 or 4).
Proof. (i) Let Ms be SN1 and Ms ~~> Ms*. Suppose Mss is SN; consider a reduction sequence R == cut1(P, Ms*, Mss) ~~> ... . Prefix this with the reduction (of the Ms argument) from cut1(P, Ms, Mss). Since Ms is SN1, cut1(P, Ms, Mss) is SN. So this reduction sequence is finite: so R is finite, hence Ms* is SN1. (ii) Similarly. QED.
The crux of the proof of strong normalisation is in the next two lemmas; lemma 5 will show that whenever M is SN it is also SN3 (and so also SN2 and SN4; and similarly, if Ms is SN then it is SN1), thus showing that all the Cut rules preserve the SN property, whence all derivations (with Cut) are SN.

Lemma 4.
(i) M is SN3 implies M is SN4.
(ii) M is SN3 implies M is SN2.
More precisely, we have to show that:
(i) Let j∈N, k∈N, h∈N, let Γ ⇒ M : P, let M be SN3(j), let Γ, x:P ⇒ M' : R, let M' be SN(k) and height(M') < h. Then cut4(P, M, x.M') is SN.
(ii) Let j∈N, k∈N, h∈N, let Γ ⇒ M : P, let M be SN3(j), let Γ, x:P Q→ Ms : R, let Ms be SN(k) and height(Ms) < h. Then cut2(P, M, x.Ms) is SN.
Proof.
The proof is by simultaneous induction on j, then on k, then on h. We refer to appeals to an induction hypothesis by "induction (i)" and "induction (ii)" as appropriate.

(i) Assume that Γ ⇒ M : P and M is SN3. Take M' s.t. Γ, x:P ⇒ M' : R and M' is SN(k) and height(M') < h. Suppose that cut4(P, M, x.M') ~~> M''. We'll show that M'' is SN (and so therefore, because M'' was arbitrary, cut4(P, M, x.M') is SN). There are the following cases:

• The reduction just involves one of the indicated occurrences of M and M'. There are two cases:
  • In the first case, we have M ~~> M* and the reduction is cut4(P, M, x.M') ~~> cut4(P, M*, x.M') == M''. Since M is SN3(j), M* is (by lemma 3) SN3(j-1). By induction (i), cut4(P, M*, x.M') is SN, i.e. M'' is SN.
  • In the second case, we have M' ~~> M* and the reduction is cut4(P, M, x.M') ~~> cut4(P, M, x.M*) == M''. Since M' is SN(k), M* is SN(k-1). By induction (i), cut4(P, M, x.M*) is SN, i.e. M'' is SN.
• M' is λy.M''' (so M''' is SN(k)) and the reduction is, by rule cut4(λ), cut4(P, M, x.λy.M''') ~~> λy.cut4(P, M, x.M''') == M''. By induction (i) (note that height(M''') < h-1), cut4(P, M, x.M''') is SN and so (by lemma 1) is λy.cut4(P, M, x.M'''), i.e. M'' is SN.
• M' is (x; Ms) (so Ms is SN(k)) and the reduction is, by rule cut4(=), cut4(P, M, x.(x; Ms)) ~~> cut3(P, M, cut2(P, M, x.Ms)) == M''. By induction (ii) (note that height(Ms) < h-1), cut2(P, M, x.Ms) is SN. Since M is SN3, M'' is SN.
• M' is (y; Ms), where y ≠ x (so Ms is SN), and the reduction is, by rule cut4(≠), cut4(P, M, x.(y; Ms)) ~~> (y; cut2(P, M, x.Ms)) == M''. By induction (ii) (note that height(Ms) < h-1), cut2(P, M, x.Ms) is SN and so (by lemma 1) is (y; cut2(P, M, x.Ms)), i.e. M'' is SN.

(ii) Assume that Γ ⇒ M : P and M is SN3. Take Ms s.t. Γ, x:P Q→ Ms : R and Ms is SN(k) and height(Ms) < h. Suppose that cut2(P, M, x.Ms) ~~> Ms''. We'll show that Ms'' is SN (and so therefore, because Ms'' was arbitrary, cut2(P, M, x.Ms) is SN). There are the following cases:

• The reduction just involves one of the indicated occurrences of M and Ms. There are two cases:
  • In the first case, we have M ~~> M* and the reduction is cut2(P, M, x.Ms) ~~> cut2(P, M*, x.Ms) == Ms''. Since M is SN3(j), M* is (by lemma 3(ii)) SN3(j-1). By induction (ii), cut2(P, M*, x.Ms) is SN, i.e. Ms'' is SN.
  • In the second case, we have Ms ~~> Ms* and the reduction is cut2(P, M, x.Ms) ~~> cut2(P, M, x.Ms*) == Ms''. Since Ms is SN(k), Ms* is SN(k-1) and so, by induction (ii), cut2(P, M, x.Ms*) is SN, i.e. Ms'' is SN.
• Ms is [] and then Ms'' == []: trivially SN.
• Ms is (M'::Ms') (so M' and Ms' are SN(k), of heights < h) and the reduction (by rule cut2(::)) is cut2(P, M, x.(M'::Ms')) ~~> cut4(P, M, x.M') :: cut2(P, M, x.Ms') == Ms''. By induction (i) and (ii) respectively the two components of Ms'' are SN; then (by lemma 1) Ms'' is SN.

QED.
Lemma 5.
(i) M is SN implies M is SN3.
(ii) Ms is SN implies Ms is SN1.
More formally, we have to show:
(i) Let P be a formula, M be SN(k) of type P, Ms be SN(k'); then cut3(P, M, Ms) is SN.
(ii) Let P be a formula, Ms be SN(k) of type P, Mss be SN(k'); then cut1(P, Ms, Mss) is SN.
Proof.
The proof is by simultaneous induction on the size of P, then k, then the height of the term M (resp. Ms), then k'.

(i) Let M be SN(k) and Ms be SN(k'). Consider any reduction cut3(P, M, Ms) ~~> M'; we have to show that M' is SN. There are the following cases:

• The reduction just involves one of the indicated occurrences of M and Ms.
  • Suppose M ~~> M* with M' == cut3(P, M*, Ms). So M* is SN(k-1). By induction (i), M* is SN3. So M' is SN.
  • Suppose instead that Ms ~~> Ms* with M' == cut3(P, M, Ms*). Then Ms* is SN(k'-1). By induction (i), M' is SN.
• M is (x; Mss) and the reduction is cut3(P, (x; Mss), Ms) ~~> (x; cut1(P, Mss, Ms)) == M'. By lemma 1, Mss is SN(k), and height(Mss) < height((x; Mss)). By induction (ii), Mss is SN1. So cut1(P, Mss, Ms) is SN; by lemma 1, M' is SN.
• M is λy.M* and Ms is [] and the reduction is cut3(P, λy.M*, []) ~~> λy.M* == M'. Since M is SN, so is M'.
• M is λy.M* and Ms is (M''::Mss) and P is S ⊃ T and the reduction is cut3(S ⊃ T, λy.M*, M''::Mss) ~~> cut3(T, cut4(S, M'', y.M*), Mss) == M'. By lemma 1, M'' is SN; by induction (i), noting that the type S of M'' is smaller than P, M'' is SN3; by lemma 4, M'' is SN4, so cut4(S, M'', y.M*) is SN. Also, Mss is SN. Since T is smaller than P, M' is, by induction (i), SN.
(ii) Let Ms be SN(k) and Mss be SN(k'). Consider any reduction cut1(P, Ms, Mss) ~~> Ms'; we have to show that Ms' is SN. There are the following cases:

• The reduction just involves one of the indicated occurrences of Ms and Mss.
  • Suppose Ms ~~> Ms* with Ms' == cut1(P, Ms*, Mss). So Ms* is SN(k-1). By induction (ii), Ms* is SN1. So Ms' is SN.
  • Suppose Mss ~~> Mss* with Ms' == cut1(P, Ms, Mss*). So Mss* is SN(k'-1). By induction (ii), Ms' is SN.
• Ms is [] and the reduction is cut1(P, [], Mss) ~~> Mss == Ms'; Mss is SN and so therefore is Ms'.
• Ms is M::Ms* and the reduction is cut1(P, M::Ms*, Mss) ~~> M::cut1(P, Ms*, Mss) == Ms'. By lemma 1, M and Ms* are SN(i) and SN(j) respectively with k = max(i, j), and height(Ms*) < height(Ms); by induction (ii), cut1(P, Ms*, Mss) is SN, so Ms' is SN.

QED.
Theorem. Every derivation term is SN. Proof. Lemma 1 shows that the axioms and rules of MJ preserve SN; lemmas 4 and 5 show that the four Cut rules also preserve SN. For example, consider the Cut4 rule: … … Γ ⇒ M : P Γ , x:P ⇒ M ′:R Cut 4 Γ ⇒ cut4 (P, M, x. M ′) : R and suppose M and M' are SN. By lemma 5(i), M is SN3; by lemma 4(i) M is SN4; by the definition of SN4, since M' is SN, cut4(P,M,x.M') is also SN. So all derivation terms are SN.
QED.
7. Strong normalisation via rewriting

The second proof of strong normalisation (and thus the third proof of admissibility of cut) depends on the recursive path-ordering (r.p.o.) theorem of Dershowitz [8]. Let > be a transitive and irreflexive ordering on a set F of operators, and T(F) be the set of terms over F. "f ≥ g" abbreviates "f > g or f = g" as usual. Then >rpo is defined recursively on T(F) by: s = f(s1, ..., sm) >rpo g(t1, ..., tn) = t iff

    si ≥rpo t for some i = 1, ..., m, or
    f > g and s >rpo tj for all j = 1, ..., n, or
    f = g and {s1, ..., sm} >>rpo {t1, ..., tn}
where >>rpo is the extension of >rpo to finite multisets and ≥rpo means >rpo or equivalent up to permutations of subterms. The r.p.o. theorem says that if > is well-founded, then so is >rpo.

We treat the term cut1(P, Ms, Mss) as if made up of an operator cut1(P) and two arguments Ms and Mss; similarly for the other cut terms. The operators are then ordered according to the following rules:

    cuti(P) > cutj(Q) if P > Q, for i, j = 1, 2, 3 and 4;
    cut4(P), cut2(P) > cut3(P), cut1(P).

For the non-cut terms, we have the operators ';', 'λ', '::' and '[]', which can just be ordered equally but below each of the cuti(P) operators. The formulae P can be ordered by the subterm relation. We thus have an ordered set Op of operators

    Op = { cuti(P) : i = 1, 2, 3 or 4, P a formula } ∪ { ';', 'λ', '::', '[]' }.

Proposition 7. The ordering > on Op is transitive, irreflexive and well-founded.
Proof. Transitivity follows by examination of cases. Irreflexivity is trivial. The only possibility of an infinite decreasing sequence is of the form cuti0(P0) > cuti1(P1) > ..., whose length must be bounded by twice the size of P0, since each step either reduces the argument P or both fixes P and reduces the suffix of the cut from 4 or 2 to 3 or 1. QED.

It follows from the r.p.o. theorem that >rpo on the set of terms of MJ (extended with the cut terms) is well-founded.

Theorem. The set of cut-reduction rules of MJ is strongly terminating.
Proof. We must check for each cut-reduction rule that the LHS >rpo RHS. Here we check just two of the rules to illustrate the technique:

    cut3(S⊃T, λy.M, M'::Ms) >rpo cut3(T, cut4(S, M', y.M), Ms)
      because cut3(S⊃T) > cut3(T), because S⊃T > T,
      and cut3(S⊃T, λy.M, M'::Ms) >rpo cut4(S, M', y.M)
        because cut3(S⊃T) > cut4(S), because S⊃T > S,
        and cut3(S⊃T, λy.M, M'::Ms) >rpo M'   because M'::Ms ≥rpo M',
        and cut3(S⊃T, λy.M, M'::Ms) >rpo M    because λy.M ≥rpo M,
      and cut3(S⊃T, λy.M, M'::Ms) >rpo Ms     because M'::Ms ≥rpo Ms.
    cut4(P, M, x.(x; Ms)) >rpo cut3(P, M, cut2(P, M, x.Ms))
      because cut4(P) > cut3(P),
      and cut4(P, M, x.(x; Ms)) >rpo M   because M ≥rpo M,
      and cut4(P, M, x.(x; Ms)) >rpo cut2(P, M, x.Ms)
        because cut4(P) = cut2(P) and {M, (x; Ms)} >>rpo {M, Ms}, because (x; Ms) >rpo Ms.

There is a minor problem: cut4(P) and cut2(P) are not necessarily equal; indeed no order between them can be inferred from the above definition of >. So one must work instead not with the operators as given but with the equivalence classes generated by the conditions that cut4(P) = cut2(P) and cut3(P) = cut1(P). This causes no additional difficulties. QED.
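A quick way to check the well-foundedness claim of Proposition 7 is to embed the operator precedence into the natural numbers, as the proof suggests: assign cuti(P) twice the size of P, plus one when i is 4 or 2. The Haskell sketch below records this; all names are ours, the formulae are propositional only, and it is merely a sanity check of the bound, not part of the proof.

```haskell
-- A numeric witness for Proposition 7: the precedence on the operators
-- cut_i(P) embeds into the naturals, so it is well-founded.  Names are ours.
data Formula = Atom String | Formula :-> Formula
             | Formula :/\ Formula | Formula :\/ Formula | Falsum

size :: Formula -> Int
size (Atom _)  = 1
size Falsum    = 1
size (p :-> q) = 1 + size p + size q
size (p :/\ q) = 1 + size p + size q
size (p :\/ q) = 1 + size p + size q

-- measure i p corresponds to the operator cut_i(P).
measure :: Int -> Formula -> Int
measure i p = 2 * size p + (if i == 2 || i == 4 then 1 else 0)
-- If cut_i(P) > cut_j(Q) in the precedence then measure i P > measure j Q:
-- either Q is a proper subformula of P, or P = Q and the suffix drops from
-- 4 or 2 to 3 or 1.  So any decreasing chain has length at most 2*size(P0)+1.
```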
8. Confluence

Confluence of these rewriting rules follows (as in [20, 21]) from the strong normalisation and the absence of critical pairs.
9. Related work

As well as in Herbelin's papers [20, 21], the same idea (of emphasising the head variable of an application term) appears in Brock's work [3] on compile-time pointer reversal and approximately in [22] (where Howard attributes the idea to Curry). Howard's treatment works well just for the implicational fragment; when other connectives are added, the treatment of implication needs modification. Herbelin [20, 21] attributes the idea to Danos et al [7] and to Girard [17]; it also appears in [18], where Girard suggests "the creation of an improvement LI of the familiar intuitionistic sequent calculus LJ" along these lines. In formal treatments (e.g. [28, 30]) of logic programming [27], calculi with a formula in a distinguished place are well known; Pfenning [30] makes explicit (in ELF) the bijection between proofs in his system (organised for proof search) and "canonical" natural deduction proofs. The ideas are discussed also in our [11]. We have tried here to present the details in a traditional notation (rather than in ELF) and in greater generality. Pfenning's calculus, where the sequents with a stoup formula are of the form Γ u→ A >> P ("A immediately entails P"), with P being just an atomic formula, is incomplete outside the hereditary Harrop setting: the rules for disjunction and existential quantification in the stoup are different. Further work is needed to clarify the relationship with the intercalation calculi of Sieg [35] and Cittadini [5]. Dershowitz [8], Okada [unpublished, see [4]] and Cichon et al [4] have drawn attention to the applicability of term rewriting techniques (going back to Gentzen [15]) in proofs of cut elimination; [36] includes similar work.
10. Conclusion and future work

We have given a version of part of Herbelin's calculus [20, 21] and shown it to be an accurate representation of normal natural deduction proofs. Whereas Herbelin describes it (when explicit substitution is included) as a "lambda-calculus isomorphic to sequent calculus structure", we prefer to think of it as a "sequent calculus isomorphic to the structure of natural deduction". We use this elsewhere [12] to show (as in [39], where the CUT rule is allowed; see also [31] and [38]) that the equivalence classes in (a variant of) LJ that are generated by the standard interpretation of cut-free LJ derivations as normal natural deductions are precisely those generated by certain basic permutations. We believe it can also be used to prove permutation-invariant properties of sequent derivations, as in [19], and to give simpler inductive proofs about normal natural deductions, such as the theorem in [5]. Proof search in MJ avoids the inefficiencies associated with the permutations in LJ; but it does not avoid the problems of contraction. It remains to be seen whether one can avoid permutations and contractions simultaneously. Comparison with the ideas in [34] for reducing the effect of the propositional permutations on the first-order proof search will be investigated, as will the possibility of a similar system for fragments of linear logic. The ideas about focusing proofs [2] or canonical (or normal) proofs [14] should be amenable to a similar treatment, i.e. presentation as a calculus with several judgment forms rather than as a calculus with a lot of constraints. For example, the notion of a normal proof in definition 6.1 of [14] contains several constraints such as "∆ is the conclusion of a c? rule and the preceding inference is an inference of type ? introducing one of the active formulas of the inference ending with ⇒ ∆". There are clear advantages to our approach in terms of implementation. Some implementation of MJ is described in [10].
11. Acknowledgments

Special thanks are due to Andrew Adams, Simon Brock, Adam Cichon, Hugo Herbelin, Grisha Mints and Frank Pfenning for making [1], [3], [4], [20], [29] and [30] available before publication. Many others, notably Henk Barendregt, Thierry Coquand, Philippe de Groot, Sachio Hirokawa, Dale Miller, Andrei Scedrov, Peter Schroeder-Heister and Nataraj Shankar, have made helpful comments and suggestions.
12. References

[1] Adams, A. A.: "Meta-theory of sequent calculus and natural deduction systems in Coq", in preparation (St Andrews University).
[2] Andreoli, J.-M.: "Logic programming with focusing proofs in linear logic", Journal of Logic and Computation 2 (1992), 297–347.
[3] Brock, S.: "Compile-time pointer reversal", from [email protected].
[4] Cichon, E. A., M. Rusinowitch and S. Selhab: "Cut elimination and rewriting: termination proofs", in preparation (preprint received in June 1996), INRIA-Lorraine, Nancy, France.
[5] Cittadini, S.: "Intercalation calculus for intuitionistic propositional logic", Report 29 in Philosophy, Methodology, Logic Series, Carnegie Mellon University (1992).
[6] Cubric, D.: "Interpolation property for bicartesian closed categories", Arch. Math. Logic 33 (1994), 291–319.
[7] Danos, V., J.-B. Joinet and H. Schellinx: "LKQ and LKT: sequent calculi for second order logic based upon dual linear decompositions of classical implication", in "Advances in Linear Logic" (Proceedings of the Cornell Workshop on Linear Logic, edited by J.-Y. Girard, Y. Lafont and L. Regnier), Cambridge University Press (1995), 211–224.
[8] Dershowitz, N.: "Orderings for term-rewriting systems", Theoretical Computer Science 17 (1982), 279–301.
[9] Dyckhoff, R.: "Contraction-free sequent calculi for intuitionistic logic", Journal of Symbolic Logic 57 (1992), 795–807.
[10] Dyckhoff, R. and J. Howe: "Proof search in a permutation-free intuitionistic sequent calculus", in preparation, St Andrews University (1996).
[11] Dyckhoff, R. and L. Pinto: "Uniform proofs and natural deductions", in Proceedings of the CADE-12 workshop on "Proof search in type theoretic languages" (edited by D. Galmiche and L. Wallen), Nancy (June 1994).
[12] Dyckhoff, R. and L. Pinto: "Permutability of proofs in intuitionistic sequent calculi", in preparation; abstract appeared in proceedings of the 10th International Congress on Logic, Methodology and Philosophy of Science, Florence (1995).
[13] Gallier, J.: "Constructive logics. Part I: a tutorial on proof systems and typed lambda-calculi", Theoretical Computer Science 110 (1993), 249–339.
[14] Galmiche, D. and G. Perrier: "On proof normalization in linear logic", Theoretical Computer Science 135 (1994), 67–110.
[15] Gentzen, G.: "The collected papers of Gerhard Gentzen" (M. Szabo, editor), North-Holland, Amsterdam (1969).
[16] Girard, J.-Y., Y. Lafont and P. Taylor: "Proofs and types", Cambridge University Press (1989).
[17] Girard, J.-Y.: "On the unity of logic", Annals of Pure and Applied Logic 59 (1993), 201–217.
[18] Girard, J.-Y.: "A new constructive logic: classical logic", Mathematical Structures in Computer Science 1 (1991), 255–296.
[19] Girard, J.-Y., A. Scedrov and P. Scott: "Normal forms and cut-free proofs as natural transformations", in "Logic from Computer Science" (Y. Moschovakis, editor), MSRI Publications 21, Springer-Verlag (1992), 217–241.
[20] Herbelin, H.: "A λ-calculus structure isomorphic to sequent calculus structure", preprint (October 1994); now available at "http://capella.ibp.fr/~herbelin/LAMBDA-BAR-FULL.dvi.gz".
[21] Herbelin, H.: "A λ-calculus structure isomorphic to Gentzen-style sequent calculus structure", Proceedings of the 1994 conference on Computer Science Logic, Kazimierz (Poland) (edited by L. Pacholski and J. Tiuryn), Springer Lecture Notes in Computer Science 933 (1995), 61–75.
[22] Howard, W. A.: "The formulae-as-types notion of construction", in "To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism" (edited by J. R. Hindley and J. P. Seldin), Academic Press (1980).
[23] Kapur, D. (editor): "Automated deduction", CADE-11 (proceedings, Saratoga Springs, 1992), Lecture Notes in Artificial Intelligence 607, Springer-Verlag (1992).
[24] Kleene, S. C.: "Permutability of inferences in Gentzen's calculi LK and LJ", Mem. Amer. Math. Soc. (1952), 1–26.
[25] Kleene, S. C.: "Introduction to metamathematics", Wolters-Noordhoff and North-Holland (1991).
[26] Leivant, D.: "Assumption classes in natural deduction", Zeitschrift für math. Logik 25 (1979), 1–4.
[27] Miller, D., G. Nadathur, F. Pfenning and A. Scedrov: "Uniform proofs as a foundation for logic programming", Annals of Pure and Applied Logic 51 (1991), 125–157.
[28] Miller, D.: "Forum: a multiple-conclusion specification logic", to appear in Theoretical Computer Science.
[29] Mints, G.: "Cut-elimination and normal forms of sequent derivations", Report CSLI-94-193, Stanford University (November 1994).
[30] Pfenning, F.: "Notes on deductive systems", Carnegie Mellon University (1994).
[31] Pottinger, G.: "Normalization as a homomorphic image of cut-elimination", Annals of Mathematical Logic 12 (1977), 323–357.
[32] Prawitz, D.: "Natural deduction", Almquist & Wiksell, Stockholm (1965).
[33] Prawitz, D.: "Ideas and results in proof theory", Second Scandinavian Logic Symposium (ed. Fenstad), North-Holland (1970), 235–308.
[34] Shankar, N.: "Proof search in the intuitionistic sequent calculus", in [23], 522–536.
[35] Sieg, W.: "Mechanisms and Search", AILA Preprint (1992).
[36] Tahhan-Bittar, E.: "Gentzen cut elimination for propositional sequent calculus by rewriting derivations", Pub. du Laboratoire de Logique, d'Algorithmique et d'Informatique de Clermont 1, Université d'Auvergne, 16 (1992).
[37] Troelstra, A. S. and D. van Dalen: "Constructivism in mathematics: an introduction (vol. 2)", North-Holland (1988).
[38] Ungar, A. M.: "Normalization, cut-elimination and the theory of proofs", CSLI Lecture Notes 28, University of Chicago Press (1992).
[39] Zucker, J.: "The correspondence between cut-elimination and normalization", Annals of Mathematical Logic 7 (1974), 1–112.
Appendix (illustrating the implicational rules for cut reduction)

cut1(P, [], Mss) = Mss
    Before: Γ P→ [] : P (by Meet) and Γ P→ Mss : R give, by Cut1, Γ P→ cut1(P, [], Mss) : R.
    After:  Γ P→ Mss : R.

cut1(P, M::Ms, Mss) = M :: cut1(P, Ms, Mss)
    Before: Γ ⇒ M : S and Γ T→ Ms : P give, by Split, Γ S⊃T→ (M::Ms) : P; with Γ P→ Mss : R this gives, by Cut1, Γ S⊃T→ cut1(P, M::Ms, Mss) : R.
    After:  Γ T→ Ms : P and Γ P→ Mss : R give, by Cut1, Γ T→ cut1(P, Ms, Mss) : R; with Γ ⇒ M : S this gives, by Split, Γ S⊃T→ (M :: cut1(P, Ms, Mss)) : R.

cut2(P, M, x.[]) = []
    Before: Γ ⇒ M : P and Γ, x:P Q→ [] : Q (by Meet) give, by Cut2, Γ Q→ cut2(P, M, x.[]) : Q.
    After:  Γ Q→ [] : Q (by Meet).

cut2(P, M, x.(M′::Ms)) = cut4(P, M, x.M′) :: cut2(P, M, x.Ms)
    Before: Γ, x:P ⇒ M′ : S and Γ, x:P T→ Ms : R give, by Split, Γ, x:P S⊃T→ (M′::Ms) : R; with Γ ⇒ M : P this gives, by Cut2, Γ S⊃T→ cut2(P, M, x.(M′::Ms)) : R.
    After:  Γ ⇒ M : P and Γ, x:P ⇒ M′ : S give, by Cut4, Γ ⇒ cut4(P, M, x.M′) : S; Γ ⇒ M : P and Γ, x:P T→ Ms : R give, by Cut2, Γ T→ cut2(P, M, x.Ms) : R; these two give, by Split, Γ S⊃T→ (cut4(P, M, x.M′) :: cut2(P, M, x.Ms)) : R.

cut3(P, (x; Ms), Mss) = (x; cut1(P, Ms, Mss))
    Before: Γ Q→ Ms : P gives, by Select, Γ ⇒ (x; Ms) : P; with Γ P→ Mss : R this gives, by Cut3, Γ ⇒ cut3(P, (x; Ms), Mss) : R.
    After:  Γ Q→ Ms : P and Γ P→ Mss : R give, by Cut1, Γ Q→ cut1(P, Ms, Mss) : R; by Select, Γ ⇒ (x; cut1(P, Ms, Mss)) : R.

cut3(S ⊃ T, λy.M, []) = λy.M
    Before: Γ ⇒ λy.M : S ⊃ T and Γ S⊃T→ [] : S ⊃ T (by Meet) give, by Cut3, Γ ⇒ cut3(S ⊃ T, λy.M, []) : S ⊃ T.
    After:  Γ ⇒ λy.M : S ⊃ T.

cut3(S ⊃ T, λy.M, M′::Ms) = cut3(T, cut4(S, M′, y.M), Ms)
    Before: Γ, y:S ⇒ M : T gives, by Abstract, Γ ⇒ λy.M : S ⊃ T; Γ ⇒ M′ : S and Γ T→ Ms : R give, by Split, Γ S⊃T→ (M′::Ms) : R; these two give, by Cut3, Γ ⇒ cut3(S ⊃ T, λy.M, (M′::Ms)) : R.
    After:  Γ ⇒ M′ : S and Γ, y:S ⇒ M : T give, by Cut4, Γ ⇒ cut4(S, M′, y.M) : T; with Γ T→ Ms : R this gives, by Cut3, Γ ⇒ cut3(T, cut4(S, M′, y.M), Ms) : R.

cut4(P, M, x.(y; Ms)) = (y; cut2(P, M, x.Ms))    (y ≠ x)
    Before: Γ, x:P Q→ Ms : R gives, by Select, Γ, x:P ⇒ (y; Ms) : R; with Γ ⇒ M : P this gives, by Cut4, Γ ⇒ cut4(P, M, x.(y; Ms)) : R.
    After:  Γ ⇒ M : P and Γ, x:P Q→ Ms : R give, by Cut2, Γ Q→ cut2(P, M, x.Ms) : R; by Select, Γ ⇒ (y; cut2(P, M, x.Ms)) : R.

cut4(P, M, x.(x; Ms)) = cut3(P, M, cut2(P, M, x.Ms))
    Before: Γ, x:P P→ Ms : R gives, by Select, Γ, x:P ⇒ (x; Ms) : R; with Γ ⇒ M : P this gives, by Cut4, Γ ⇒ cut4(P, M, x.(x; Ms)) : R.
    After:  Γ ⇒ M : P and Γ, x:P P→ Ms : R give, by Cut2, Γ P→ cut2(P, M, x.Ms) : R; with Γ ⇒ M : P again this gives, by Cut3, Γ ⇒ cut3(P, M, cut2(P, M, x.Ms)) : R.

cut4(P, M, x.(λy.M′)) = λy.cut4(P, M, x.M′)
    Before: Γ, x:P, y:S ⇒ M′ : T gives, by Abstract, Γ, x:P ⇒ λy.M′ : S ⊃ T; with Γ ⇒ M : P this gives, by Cut4, Γ ⇒ cut4(P, M, x.λy.M′) : S ⊃ T.
    After:  Γ ⇒ M : P gives (by weakening) Γ, y:S ⇒ M : P; with Γ, x:P, y:S ⇒ M′ : T this gives, by Cut4, Γ, y:S ⇒ cut4(P, M, x.M′) : T; by Abstract, Γ ⇒ λy.cut4(P, M, x.M′) : S ⊃ T.