New Relevant Logic ER Noriaki Yoshiura Computer Center, Gunma University 1-5-1 Tenjincho, Kiryu City, Gunma Prefecture
[email protected] Phone number: +81-277-30-1160 Fax number: +81-277-30-1161
Relevant logic has been studied in order to remove the fallacies of implication from classical logic. Many kinds of relevant logics have been proposed, but these logics are weaker than necessary and unsuitable for the formalization of knowledge reasoning. This paper presents a new relevant logic ER. We give two characteristics of ER: the first is that variable-sharing, which is a necessary condition for relevant logics, holds in ER, and the second is that ER excludes the fallacies of relevance and validity, which are considered to be strong fallacies and are removed from most relevant logics. Further, we show that ER is a stronger logic than the typical relevant logic R; in particular, some classical logic theorems that cannot be inferred in R and that do not include fallacies can be inferred in ER. It follows that ER is more suitable for the formalization of knowledge reasoning.
1 Introduction
Formalization of knowledge reasoning is one of the main issues in artificial intelligence. Logic is a useful method for this formalization; however, classical logic is not sufficient because the meaning of implication in classical logic is different from that in daily speech[2]. For example, in classical logic A→B can be inferred from B for an arbitrary formula A, and no relationship between A and B is necessary. From the viewpoint of daily speech, however, such a relationship is necessary for the inference of A→B. Relevant logic is studied so that such fallacies of implication can be removed from classical logic. In [2], variable-sharing is suggested as a necessary condition for the relevance of implication. Variable-sharing means that if A→B is a theorem, then A and B share the same atomic proposition. [2] also suggests that A→B does not refer to the truth value of A or B.
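The variable-sharing condition can be checked mechanically. The following is a minimal illustrative sketch, not part of the paper, assuming a small ad hoc representation of formulae as nested tuples; it only tests whether the two sides of an implication share an atomic proposition.

```python
# Minimal sketch (not from the paper): formulae as nested tuples, e.g.
# ('->', ('atom', 'A'), ('atom', 'B')), ('and', X, Y), ('or', X, Y), ('not', X).

def atoms(formula):
    """Collect the atomic propositions occurring in a formula."""
    if formula[0] == 'atom':
        return {formula[1]}
    return set().union(*(atoms(sub) for sub in formula[1:]))

def shares_variable(implication):
    """Variable-sharing: antecedent and consequent share an atomic proposition."""
    assert implication[0] == '->'
    return bool(atoms(implication[1]) & atoms(implication[2]))

# A -> (B -> A) satisfies variable-sharing (it is only a necessary condition),
# while A -> (B v ~B) violates it:
print(shares_variable(('->', ('atom', 'A'), ('->', ('atom', 'B'), ('atom', 'A')))))  # True
print(shares_variable(('->', ('atom', 'A'),
                       ('or', ('atom', 'B'), ('not', ('atom', 'B'))))))              # False
```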
In [2, 10] the fallacies of implication are classified as fallacies of relevance, of validity or of necessity. Fallacies of relevance and validity are considered to be strong fallacies, and they are removed from most relevant logics. As a method of formalizing knowledge reasoning, relevant logic is more suitable than classical logic in the sense that the fallacies of implication are removed[4]. However, the removal of fallacies makes relevant logics weak with respect to provability. For example, although A→(B→A ∧ B) and (A ∧ B→C)→(A→(B→C)) include no fallacies, they are not theorems in most relevant logics. This paper proposes a relevant logic ER, which is a sequent style natural deduction system. Attributes are used for the removal of fallacies in the proofs of ER. This logic does not include the fallacies of relevance and validity, and it is stronger than the typical relevant logic R. In ER, the disjunctive syllogism, which is a natural inference rule in knowledge reasoning[2], also holds. From the viewpoint of the meaning of implication in knowledge reasoning, an implication holds if its premise is necessary for its conclusion to hold; in other words, A→B holds if A is necessary for B to hold, and in daily speech an implication holds when the premise is necessary for the conclusion to hold[2]. Although it is desirable for the meaning of the formula A→B to be that B can be deduced from A, this is not the meaning of the formula A→B in classical logic. For example, in classical logic, A→(B→A ∧ B) can be deduced as follows: from the hypotheses [A] and [B], A ∧ B is inferred by ∧I; discharging B by →I gives B→A ∧ B; discharging A by →I gives A→(B→A ∧ B).
In this proof, both A and B play a role in the deduction of A ∧ B. Thus, A→(B→A ∧ B) should be a theorem in relevant logic; however, R cannot prove this formula. In ER, A→(B→A ∧ B) is a theorem, and the meaning of implication in ER is more suitable for knowledge reasoning than that in R.
2 Relevant Logic and its problems

2.1 Relevant Logic
The purpose of relevant logic is to remove the fallacies of implication. These fallacies are classified as follows[10]:
1. Fallacy of relevance. This fallacy occurs when there is no relation between premise and conclusion. The following formula is a typical example: A→(B→A). In classical logic, if A is true, then B→A is true. However, B can be any formula, and B need not have any relationship with A. In such cases, it is strange that A→(B→A) holds.
2. Fallacy of validity. In classical logic, if A is false or B is true, then A→B is true. For example: A→(B ∨ ¬B) and (A ∧ ¬A)→B. In these formulae, premise and conclusion are not related, and it is strange that these formulae hold.
3. Fallacy of necessity. In classical logic, A→((A→B)→B) is a theorem, and if A is true, then (A→B)→B is true for any formula B. A→((A→B)→B) includes a fallacy in the sense that a relationship between A and B is not necessary.
Variable-sharing is the condition that A and B of A→B share the same atomic proposition. This is the most fundamental necessary condition for the relevance of implication. The study of relevant logic began with the proposal of strict implication by Lewis[8]. Only the fallacy of relevance is removed by strict implication [6, 7]. Church[5] and Moh[9] proposed the implicational fragment system R→. From R→, the fallacies of relevance and validity are removed. Ackermann also proposed entailment[1]. Anderson and Belnap proposed the systems R, which includes R→, and E, which includes entailment[2]. All fallacies are removed from E. Many other kinds of relevant logics have been proposed[3]. The fallacies of relevance and validity are considered to be strong fallacies, and they are removed from almost all relevant logics. R is a typical logic which excludes the fallacies of relevance and validity[2].
2.2 Problem
Although many kinds of relevant logics have been proposed, they are weak from the viewpoint of provability. For instance, the following formulae are not theorems in relevant logic [2]:
1. A→(B→A ∧ B)
2. A→A ∧ (B ∨ ¬B)
In relevant logic, A ∧ B→A is a theorem and syllogism holds. If (1) were a theorem, then A→(B→A), which includes the fallacy of relevance, would be a theorem. If (2) were a theorem, then A→(B ∨ ¬B), which includes the fallacy of validity, would be a theorem. Thus, (1) and (2) are not theorems in relevant logic. However, since these formulae themselves do not include fallacies, they should be theorems in relevant logic. Disjunctive syllogism (DS), the rule that infers B from A ∨ B and ¬A, does not hold in relevant logic because A ∧ ¬A→B can be deduced by disjunctive syllogism as in FIGURE 1 (see [8]).

Fig. 1. Proof of A ∧ ¬A→B by using DS: from the hypothesis [A ∧ ¬A], A and ¬A are inferred by ∧E; from ¬A, ¬A ∨ B is inferred by ∨I; from A and ¬A ∨ B, B is inferred by DS; discharging the hypothesis by →I gives A ∧ ¬A→B.
The inference rules in this proof are Implication Introduction (→I), Or Introduction (∨I), And Elimination (∧E) and DS. In [8], DS is considered to be the cause of the fallacy of validity, and DS does not hold in relevant logics. Thus some classical logic theorems which do not include fallacies and which are deduced by DS are not theorems in relevant logic. As described above, the removal of fallacies makes relevant logics weak with respect to provability.
Fig. 2. Proof of a formula including the fallacy of relevance by →I: from the hypothesis [A], B→A is inferred by →I without discharging any hypothesis of B; discharging A by →I gives A→(B→A).
3 Relevant Logic ER
This section presents the relevant logic ER, which solves the problems described above. ER is constructed by modifying a natural deduction system of classical logic, introducing attributes to formulae and adding attribute operation rules to each inference rule. The use of attributes prevents the deduction of formulae including fallacies and enables many classical logic theorems that exclude fallacies to be theorems of ER. In the following, the modification of the inference rules and the attributes are explained, and we define ER.
3.1 How to construct ER
This section discusses some inference rules related to deducing fallacies and presents a method for preventing the deduction of fallacies.
Implication Introduction (→I) In classical logic, A→(B→A) can be deduced as in FIGURE 2. A→(B→A) includes the fallacy of relevance. The cause of the fallacy is that no hypothesis of B is used in the deduction of B→A from A. In relevant logic, this proof does not hold because of the restriction that the hypotheses discharged 1 in Implication Introduction (→I) must actually exist. This restriction is also necessary in ER.
⊥E and DS In order to remove the fallacy of validity, ⊥E, the rule that infers an arbitrary formula A from ⊥, cannot hold in ER, because ⊥E plays a role in deducing A ∧ ¬A→B.
1 Discharge is the operation of removing hypotheses from a proof.
Fig. 3. Redundant proof: from the hypotheses [A] and [B], A ∧ B is inferred by ∧I; A is inferred from A ∧ B by ∧E; discharging B by →I gives B→A; discharging A by →I gives A→(B→A).
DS can infer A ∧ ¬A→B as in FIGURE 1, and for this reason DS is not adopted as an inference rule in [8], in order to prevent this fallacy from being deduced. Thus, if DS is to be an inference rule in ER, then some restriction is necessary to prevent the deduction of the fallacy. In FIGURE 1, we take the cause of the fallacy to be the inference from ¬A to ¬A ∨ B, because this inference introduces B, which is not related to the hypothesis A ∧ ¬A. Although ∨I itself is a legitimate inference rule, a restriction is imposed on the usage of ∨I to prevent the deduction of A ∧ ¬A→B. The restriction is that the conclusion of ∨I cannot be a major premise 2 of DS.
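As an illustration only (not part of the paper), this restriction can be pictured as a check on annotated proof steps: each derived formula remembers which rule produced it, and DS refuses a major premise that was produced by ∨I. A minimal sketch, with an assumed step representation:

```python
# Illustrative sketch only: each proof step records the rule that produced it.
class Step:
    def __init__(self, formula, rule):
        self.formula = formula  # e.g. ('or', ('not', ('atom', 'A')), ('atom', 'B'))
        self.rule = rule        # name of the rule that produced this step, e.g. 'orI'

def disjunctive_syllogism(major, minor):
    """DS: from A v B and the negation of one disjunct, infer the other disjunct,
    rejecting a major premise that was itself built by vI (the ER restriction)."""
    if major.rule == 'orI':
        raise ValueError('restriction: the conclusion of vI cannot be the major premise of DS')
    tag, left, right = major.formula
    assert tag == 'or'
    if minor.formula == ('not', left) or left == ('not', minor.formula):
        return Step(right, 'DS')
    if minor.formula == ('not', right) or right == ('not', minor.formula):
        return Step(left, 'DS')
    raise ValueError('minor premise does not contradict either disjunct')

# The step that blocks FIGURE 1: ~A v B was obtained from ~A by vI, so DS is refused.
not_a_or_b = Step(('or', ('not', ('atom', 'A')), ('atom', 'B')), 'orI')
a = Step(('atom', 'A'), 'hyp')
try:
    disjunctive_syllogism(not_a_or_b, a)
except ValueError as e:
    print(e)
```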
A restriction to prevent redundant inference In FIGURE 3, a formula including a fallacy is deduced in spite of the above restriction on →I. In FIGURE 3, A ∧ B is deduced from A and B, and A is then deduced from A ∧ B. This redundant inference is the cause of the inference including the fallacy. The inferences in FIGURE 4 are of the same kind as this redundant inference and can be the cause of deducing fallacies. These redundant inferences are those in which the conclusion of an introduction rule is used as the premise of an elimination rule; such inferences introduce unnecessary hypotheses. Thus these inferences must be forbidden. In R, if A ∧ B is deduced from A and B, then the hypotheses of A and B must be the same. Under this restriction and the restriction on →I, the proof of FIGURE 3 does not hold in R. However, A→(B→A ∧ B) is not a theorem because of these restrictions. On the other hand, the restriction in ER enables A→(B→A ∧ B) to be a theorem and forbids the proof of FIGURE 3. Since the restriction in ER only forbids a redundant proof, it is natural.
2 The major premise of DS means A ∨ B in the inference "from A ∨ B and ¬A, infer B".
Fig. 4. Redundant inferences: the conclusion of an introduction rule is used as the major premise of the corresponding elimination rule (∧I followed by ∧E, →I followed by →E, and ∨I followed by ∨E).

Fig. 5. Deduction of the fallacy of relevance by RAA: a derivation of A→(B→A) that uses RAA several times; the subproof enclosed by the dotted line derives ¬A ∧ B from the hypothesis ¬(B→A).
Reductio ad absurdum Reductio ad absurdum (RAA) is the following inference rule: if ⊥ is deduced from the hypothesis ¬A, then ¬A is discharged and A is concluded.
A→(B→A), which includes the fallacy of relevance, can be deduced by this inference rule as in FIGURE 5, in spite of the restrictions described above. The subproof enclosed by the dotted line in FIGURE 5 seems to be the cause of the inference of this formula. Since, in relevant logic, an implication does not refer to the truth values of its constituent propositions, ¬(B→A) cannot be deduced from B and ¬A. In fact, ¬(B→A)→B ∧ ¬A is not a theorem in the relevant logic R. The subproof enclosed by the dotted line in
Fig. 6. Derivation of the excluded middle A ∨ ¬A: from the hypothesis [A], A ∨ ¬A is inferred by ∨I; with the hypothesis [¬(A ∨ ¬A)], ⊥ is inferred by →E; discharging A by →I gives ¬A; A ∨ ¬A is inferred by ∨I; with [¬(A ∨ ¬A)], ⊥ is inferred by →E; discharging ¬(A ∨ ¬A) by RAA gives A ∨ ¬A.
FIGURE 5 is not allowed in the relevant logic R because of the ∧I restriction described above. However, because of the same restriction, A→(B→A ∧ B) is also not a theorem in R. In FIGURE 5, the conclusion of RAA is a major premise of And Elimination (∧E), and ¬A is a conclusion of ∧E. This inference is essentially the same as the inference in which ∧E infers ¬A from a formula ¬A ∧ B that was inferred by ∧I, and it is one of the redundant inferences described above. Thus, we impose the following restriction on RAA: if a hypothesis ¬A discharged by RAA is the major premise 3 of →E, then the minor premise 4 must not be the conclusion of an I-rule 5. This restriction forbids the strange relationship between the hypotheses of the proof of ¬A and the hypotheses of the proof of B in FIGURE 5. Under this restriction, although ¬(B→A)→B ∧ ¬A is not a theorem, A→(B→A ∧ B) is a theorem.
Excluded middle In general, the excluded middle holds whenever RAA holds. FIGURE 6 shows this fact. However, the last application of RAA in this proof is forbidden under the RAA restriction introduced above. Thus the inference rules in FIGURE 7 are introduced explicitly so that the excluded middle holds. The excluded middle holds because of these inference rules.
3 The major premise means A→B in the inference rule "from A→B and A, infer B".
4 The minor premise means A in the inference rule "from A→B and A, infer B".
5 I-rule means an introduction rule of natural deduction.
3.2 Introduction of attribute
Fig. 7. The inference rules EM 1 and EM 2: if B is deduced from the hypothesis [¬A], then ¬A is discharged and A ∨ B is concluded (EM 1); if A is deduced from the hypothesis [¬B], then ¬B is discharged and A ∨ B is concluded (EM 2).

Among the five problems described above, the problem of →I is solved by the restriction on →I, and that of the excluded middle is solved by the addition of the inference rules (EM 1, EM 2). It is necessary to impose some restrictions on several inference rules to solve the other problems. In order to solve the problems of DS and redundant inference, we introduce the restriction that the conclusion of an I-rule cannot be the major premise 6 of an E-rule. Realizing this restriction requires classifying the formulae in a proof according to whether they can be major premises of E-rules. In order to solve the problem of RAA, it is also necessary to classify hypotheses according to whether they can be discharged by RAA. All hypotheses can be major premises of E-rules, because hypotheses are not conclusions of I-rules. Moreover, hypotheses can be classified into those which can be discharged by RAA and those which cannot. Thus, we classify the formulae in a proof into those that cannot be major premises of E-rules (A) and those that can; the latter are further classified into hypotheses that can be discharged by RAA (B) and other formulae (C):
1. Formulae which cannot be major premises of E-rules · · · (A)
2. Formulae which can be major premises of E-rules. These formulae are further classified into two kinds:
(a) Hypotheses that can be discharged by RAA · · · (B)
(b) Other formulae · · · (C)
ER uses attributes in order to classify formulae. That is to say, an attribute specifies the kind of a formula, and the following attribute values are used in accordance with the above classification:
1. i · · · (A)
2. r · · · (B)
3. e · · · (C)
The restriction is realized by attaching an attribute calculus to each inference rule and reflecting the attributes in the applicability of the inference rule. In the following, ϕ, ϕ1, ϕ2, · · ·, φ, φ1, φ2, · · · are used as meta variables for attribute values.
6 The major premise of an E-rule with one premise is the premise of the rule. The major premise of ∨E is A ∨ B in the rule whose premises are A ∨ B, C and C and whose conclusion is C.
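To make the classification concrete, here is a small illustrative sketch (not from the paper) that tags formulae in a proof with the attribute values i, r and e according to the classification above; the names used in the sketch are assumptions of this illustration.

```python
from enum import Enum

class Attr(Enum):
    I = 'i'  # (A): cannot be the major premise of an E-rule (conclusions of I-rules)
    R = 'r'  # (B): a hypothesis that may be discharged by RAA
    E = 'e'  # (C): may be the major premise of an E-rule

def classify(is_conclusion_of_i_rule, is_raa_dischargeable_hypothesis):
    """Assign an attribute value following the classification in the text."""
    if is_conclusion_of_i_rule:
        return Attr.I
    if is_raa_dischargeable_hypothesis:
        return Attr.R
    return Attr.E

# A hypothesis introduced for RAA gets r; the conclusion of an I-rule gets i;
# an ordinary hypothesis gets e and may serve as an E-rule major premise.
print(classify(False, True))   # Attr.R
print(classify(True, False))   # Attr.I
print(classify(False, False))  # Attr.E
```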
3.3 Relevant Logic ER
Definition 1 (Formulae of ER) Atomic propositions are formulae of ER. If A and B are formulae of ER, then ¬A, A ∧ B, A ∨ B and A→B are formulae of ER.
Definition 2 (Attribute values of a formula) e, i and r are defined as attribute values of a formula. e means that a formula with this attribute value can be the major premise of an elimination rule. i means that a formula with this attribute value cannot be the major premise of an elimination rule. r means that a formula with this attribute value can be discharged by the rule RAA. If A is a formula and ϕ is an attribute value, A : ϕ is defined as an attributed formula. In the following, ϕ, ϕ1, · · ·, φ, φ1, · · · are used as meta variables for attribute values.
Definition 3 If Γ is a multiset of attributed formulae and A is an attributed formula or empty, Γ ` A is defined as a sequent. A proof of ER is defined as a finite sequence of sequents F1, · · ·, Fn in which each Fi satisfies one of the following conditions.
1. Fi is an axiom.
2. Fi is deduced by one of the inference rules of FIGURE 8 using some of F1, · · ·, Fi−1 as premises.
Definition 4 (Theorem of ER) If F1, · · ·, Fn is a proof and Fn is ` A : ϕ, then A is a theorem of ER.
The right side attribute value of the conclusion of every I-rule in FIGURE 8 is i, so such a conclusion cannot be the major premise of an E-rule. If a sequent is the major premise of an E-rule, then its right side attribute value must be e.
Fig. 8. The inference rules of ER (Γ, ∆, ∆1, ∆2 are multisets of attributed formulae; an empty right side is left blank):
Axiom: A : e ` A : e        Axiom: ¬A : r ` ¬A : r
∧I: from Γ ` A : ϕ1 and ∆ ` B : ϕ2, infer Γ, ∆ ` A ∧ B : i
∧E1: from Γ ` A ∧ B : e, infer Γ ` A : e
∧E2: from Γ ` A ∧ B : e, infer Γ ` B : e
∨I1: from Γ ` A : ϕ, infer Γ ` A ∨ B : i
∨I2: from Γ ` B : ϕ, infer Γ ` A ∨ B : i
∨E1: from Γ ` A ∨ B : e, ∆1, A : e ` C : e and ∆2, B : e ` C : e, infer Γ, ∆1, ∆2 ` C : e
∨E2: from Γ ` A ∨ B : e, ∆1, A : e ` C : i and ∆2, B : e ` C : ϕ, infer Γ, ∆1, ∆2 ` C : i
∨E3: from Γ ` A ∨ B : e, ∆1, A : e ` C : ϕ and ∆2, B : e ` C : i, infer Γ, ∆1, ∆2 ` C : i
∨E4: from Γ ` A ∨ B : e, ∆1, A : e ` and ∆2, B : e `, infer Γ, ∆1, ∆2 `
→I: from Γ, A : e ` B : ϕ, infer Γ ` A→B : i
→E: from Γ ` A→B : e and ∆ ` A : ϕ, infer Γ, ∆ ` B : e
¬I: from Γ, A : e `, infer Γ ` ¬A : i
¬E1: from Γ ` A : ϕ and ∆ ` ¬A : e, infer Γ, ∆ `
¬E2: from Γ ` A : e and ¬A : r ` ¬A : r, infer Γ, ¬A : r `
RAA: from Γ, ¬A : r `, infer Γ ` A : e
EM 1: from Γ, ¬A : e ` B : ϕ, infer Γ ` A ∨ B : i
EM 2: from Γ, ¬B : e ` A : ϕ, infer Γ ` A ∨ B : i
DS1: from Γ ` A ∨ B : e and ∆, A : e `, infer Γ, ∆ ` B : e
DS2: from Γ ` A ∨ B : e and ∆, B : e `, infer Γ, ∆ ` A : e
C1: from Γ, A : ϕ, A : ϕ `, infer Γ, A : ϕ `
C2: from Γ, A : ϕ, A : ϕ ` B : φ, infer Γ, A : ϕ ` B : φ
C3: from Γ, ¬A : e, ¬A : r `, infer Γ, ¬A : r `
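As an illustration only (not the paper's formulation), the attribute bookkeeping of FIGURE 8 can be mimicked by functions over attributed sequents: I-rules stamp their conclusion with i, and E-rules insist that their major premise carry e, which is exactly what blocks the redundant proof of FIGURE 3. The names and the sequent representation below are assumptions of this sketch.

```python
# Illustrative sketch: a sequent is (hypotheses, formula, attribute).
def and_I(seq1, seq2):
    """∧I: the conclusion always receives attribute 'i' (cannot feed an E-rule)."""
    (g1, a, _), (g2, b, _) = seq1, seq2
    return (g1 + g2, ('and', a, b), 'i')

def and_E1(seq):
    """∧E1: the major premise must carry attribute 'e'."""
    hyps, f, attr = seq
    if attr != 'e' or f[0] != 'and':
        raise ValueError('major premise of ∧E1 must be A ∧ B with attribute e')
    return (hyps, f[1], 'e')

a = (['A'], ('atom', 'A'), 'e')          # hypothesis A : e
b = (['B'], ('atom', 'B'), 'e')          # hypothesis B : e
conj = and_I(a, b)                       # A, B ⊢ A ∧ B : i
try:
    and_E1(conj)                         # the redundant step of FIGURE 3 is rejected
except ValueError as e:
    print(e)
```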
4 Characteristics of ER
This section proves that variable-sharing holds and that the fallacies of relevance and validity do not exist in ER. In addition, this section shows that some formulae that cannot be deduced in R can be deduced in ER. First, the lemmas necessary for the proof of these characteristics are presented. Set operators 7 are used for multisets.
Definition 5 (Subformula) A subformula of A is a formula occurring in A. A v B if and only if A is a subformula of B and B is not ¬A. A < B if and only if A v B and B is neither A nor ¬A.
7 ∈, ⊆
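The subformula relations of Definition 5, and the sharing relation ∼ used in Definition 6 below, are easy to state as executable checks. This is an illustrative sketch only, reusing the tuple representation of formulae assumed earlier; the function names are assumptions of the sketch.

```python
def subformulae(f):
    """All subformulae of f (including f itself)."""
    subs = {f}
    if f[0] != 'atom':
        for part in f[1:]:
            subs |= subformulae(part)
    return subs

def sqsubseteq(a, b):
    """A v B in the paper's notation: A is a subformula of B and B is not ¬A."""
    return a in subformulae(b) and b != ('not', a)

def lt(a, b):
    """A < B: A v B and B is neither A nor ¬A."""
    return sqsubseteq(a, b) and b != a

def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*(atoms(p) for p in f[1:]))

def shares(a, b):
    """A ∼ B: A and B share an atomic proposition (Definition 6)."""
    return bool(atoms(a) & atoms(b))

A, B = ('atom', 'A'), ('atom', 'B')
print(lt(A, ('and', A, B)))      # True:  A < A ∧ B
print(lt(A, ('not', A)))         # False: ¬A is excluded
print(shares(('and', A, B), B))  # True
```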
Definition 6 A ∼ B holds if and only if A and B share the same atomic proposition. Moreover, A ∼∗ B in the multiset Γ of formulae holds if and only if Γ contains C1 , · · · , Cn , A and B such that A ∼ C1 , C1 ∼ C2 , · · · , Cn ∼ B. Definition 7 Suppose that Γ is a multiset of formulae. It is said that Γ is connected if and only if A ∼∗ B for any formulae A and B in Γ . Next, we prove some lemmas to prove that fallacies of implications are removed in ER and that Variable-Sharing holds in ER. Lemma 1 Suppose that a sequent F is deduced in ER. (1) If a sequent F is Γ ` A : e, then all of the followings hold. (A) If ¬B : r ∈ Γ , then Γ contains Q : e such that B < Q. (B) Γ contains P : e such that A < P , or Γ = {A : e} (2) If F is Γ ` A : i and ¬B : r ∈ Γ , then one of the followings holds. (A) Γ contains Q : e such that B < Q. (B) B < A (C) A is ¬B. (3) If F is Γ ` A : r, then Γ = {A : r}. (4) If F is Γ ` and ¬B : r ∈ Γ , then one of the following holds. (A) Γ contains Q : e such that B < Q. (B) Γ = {B : e, ¬B : r}. Proof of Lemma 1: We prove this lemma by induction on the proof P of F . 1. P consists of just F , then obviously F is inferred by Axiom. Thus, the claim in this case is trivial. 2. P ends with an application of RAA. In this case, we can suppose that F is Γ ` A : e. (a) We prove that (A) of (1) in the claim holds. Suppose that ¬P : r ∈ Γ . By the induction hypothesis, one of the followings holds. i. Γ ∪ {¬A : r} contains Q : e such that P < Q.
ii. Γ ∪ {¬A : r} = {P : e, ¬P : r}. However, (ii) does not hold because ¬P : r ∈ Γ and Γ ∪{¬A : r} contains more than two attribute formulae whose attribute is r. Thus, Γ contains Q : e such that P < Q. (b) We prove that (B) of (1) holds. By the induction hypothesis, one of the followings holds. i. Γ ∪ {¬A : r} contains P : e such that A < P . ii. Γ ∪ {¬A : r} = {A : e, ¬A : r}. In the case of (i), Γ contains P : e such that A < P , and in the case of (ii), Γ is {A : e}. Thus, (B) of (1) holds. 3. P ends with an application of ∧I, ∨I1, ∨I2, ∧E1 or ∧E2. It is trivial by the induction hypothesis. 4. P ends with an application of ∨E1, ∨E2, ∨E3 or ∨E4. We give the proof for the ∨E1 case. The other cases can be proved in the same way. In the ∨E1 case, we can suppose that F is Γ ` A : e. (a) We prove that (A) of (1) holds. Suppose that ¬P : r ∈ Γ ∪ ∆1 ∪ ∆2 . If ¬P : r ∈ Γ , then by the induction hypothesis, Γ contains Q : e such that P < Q. If ¬P : r ∈ ∆1 , then by the induction hypothesis, ∆1 contains Q : e such that P < Q. If ¬P : r ∈ ∆2 , then by the induction hypothesis, ∆2 contains Q : e such that P < Q. Therefore, Γ ∪ ∆1 ∪ ∆2 contains Q : e such that P < Q. We have proved that (A) of (1) holds in this case. (b) We prove that (B) of (1) holds. By the induction hypothesis, one of the followings holds. i. ∆1 ∪ {A : e} contains P : e such that C < P . ii. ∆1 ∪ {A : e} = {C : e}. In the case of i, if P : e is not A : e, then P : e is in Γ ∪∆1 ∪∆2 . Otherwise, by the induction hypothesis, Γ contains Q : e such that A ∨ B v Q, and C < Q. This shows (B) of (1) holds. In the case of ii, C : e is A : e. By the induction hypothesis, Γ contains Q : e such that A ∨ B v Q. Thus, A < Q and (B) of (1) holds. 5. P ends with an application of→I. In this case, we can suppose that F is Γ ` A→B : i. Suppose ¬P : r ∈ Γ . The premise of F is Γ, A : e ` B : ϕ. Γ, A : e is
not B : e because Γ is not an empty set. Thus, ϕ is not r by the induction hypothesis. No matter which ϕ is i or e, by the induction hypothesis, one of the followings holds. (a) Γ ∪ {A : e} contains Q : e in such that P < Q. (b) P < B. (c) B is ¬P . In the case of (a), if Q is A, then P < A→B and (B) of (2) holds. Otherwise, Q : e is in Γ , and therefore (A) of (2) holds. In the case of (b) or (c), P < A→B, therefore (B) of (2) holds. 6. P ends with an application of→E. In this case, we prove (A) and (B) of (1) in the claim. (a) We prove that (A) of (1) holds. Suppose that ¬P : r ∈ Γ ∪ ∆. If ¬P : r ∈ Γ , by the induction hypothesis, then Γ contains Q : e such that P < Q. Therefore, (A) of (1) holds. If ¬P : r ∈ ∆, then we have two cases; one is that ϕ is e or i, and the other is that ϕ is r. In the first case, by the induction hypothesis, one of the followings holds. i. ∆ contains Q : e such that P < Q. ii. P < A. iii. A is ¬P . In the case of (i), (A) of (1) holds. In the case of (ii) or (iii), by the induction hypothesis, Γ contains R : e such that A→ B < R, and P < R. Therefore, (A) of (1) holds. In the case that ϕ is r, by the induction hypothesis, ∆ ` A : ϕ is ¬P : r ` ¬P : r and A is ¬P . Moreover, by the induction hypothesis, Γ contains R : e such that ¬P → B v R, and P < R. Therefore (A) of (1) holds. (b) We prove that (B) of (1) holds. By the induction hypothesis, Γ contains R : e such that A→ B v R, and B < R. Therefore, (B) of (1) holds. 7. P ends with an application of ¬E1. Suppose that ¬P : r ∈ Γ ∪ ∆. In the case that ¬P : r ∈ Γ , if ϕ is e or i, then one of the followings holds by the induction hypothesis. (a) Γ contains Q : e such that P < Q.
(b) P < A. (c) A is ¬P . In the case of (a), (A) of (4) holds. In the case of (b) or (c), by the induction hypothesis, ∆ contains R : e such that P < R. Therefore, (A) of (4) holds. If ϕ is r, then by the induction hypothesis, Γ ` A : ϕ is ¬P : r ` ¬P : r and A is ¬P . By the induction hypothesis, ∆ contains R : e such that ¬A v R. Thus, ¬¬P v R. It follows that Γ ∪ ∆ contains R : e such that P < R. Therefore (A) of (4) holds. In the case that ¬P : r ∈ ∆, by the induction hypothesis, there exists ∆ contains Q : e such that ¬P < Q, and Q : e is in Γ ∪ ∆. Therefore, (4) of (4) holds. 8. P ends with an application of ¬E2. Suppose ¬P : r ∈ Γ ∪ {¬A : r}. In the case that ¬P : r ∈ Γ , by induction hypothesis, Γ contains Q : e such that P < Q. Therefore (A) of (4) holds. In the case that ¬P : r is ¬A : r, by the induction hypothesis, Γ contains R : e such that A < R, or Γ = {A : e}. If R : e exists, then Γ contains R : e such that P < R. Thus, (A) of (4) holds. If Γ = {A : e}, the Γ, ∆ ` is A : e, ¬A : r `. Thus, (B) of (4) holds. 9. P ends with an application of ¬I. Suppose ¬P : r ∈ Γ . By the induction hypothesis, one of the followings holds. (a) Γ ∪ {A : e} contains Q : e such that P < Q. (b) Γ ∪ {A : e} = {¬P : r, P : e}. In the case of (a), if Q is not A, then Γ contains Q : e. Therefore (A) of (2) holds. Otherwise, P < ¬A. Therefore, (B) of (2) holds. In the case of (b), P is A and the conclusion of this inference rule is ¬A : r ` ¬A : i. Therefore, (C) of (2) holds. 10. P ends with an application of EM 1 or EM 2. We give the proof for the case of EM 1. The other case can be proved in the same way. If ϕ is r, then by the induction hypothesis, Γ, ¬A : e ` B : ϕ is B : r ` B : r. However, this is inconsistent. Therefore ϕ is e or i. Suppose that ¬P : r ∈ Γ . Then, one of the followings holds. (a) Γ ∪ {¬A : e} contains Q : e such that P < Q.
(b) P < B. (c) B is ¬P . In the case of (a), if Q : e is not ¬A : e, then Q : e is in Γ . Therefore (A) of (2) holds. Otherwise, P < ¬A and P < A ∨ B. Therefore (B) of (2) holds. In the case of (b) or (c), P < A ∨ B. Therefore (B) of (2) holds. 11. P ends with an application of DS1 of DS2. We give the proof for the case of DS1. The other case can be proved the same way. (a) We prove that (A) of (1) holds. Suppose that ¬P : r ∈ Γ ∪ ∆. If ¬P : r ∈ Γ , then by the induction hypothesis, Γ contains Q : e such that P < Q. Therefore (a) of (1) holds. If ¬P : r ∈ ∆, by the induction hypothesis, one of the followings holds. i. ∆ ∪ {A : e} contains Q : e such that P < Q. ii. ∆ ∪ {A : e} = {P : e, ¬P : r}. In the case of (i), if Q : e is not A : e, then Q : e ∈ ∆. Therefore (A) of (1) holds. Otherwise, by the induction hypothesis, Γ contains R : e such that A ∨ B < R. It implies P < R. Therefore (A) of (1) holds. In the case of (ii), P is A. By the induction hypothesis, Γ contains R : e such that A ∨ B v R. It follows that Γ ∪ ∆ contains R : e such that P < R. Therefore (A) of (1) holds. (b) We prove that (B) of (1) holds. By the induction hypothesis, Γ contains R : e such that A ∨ B v R. Therefore Γ ∪ ∆ contains R : e such that B < R. 12. P ends with an application of C1, C2 or C3. We give the proof for the case of C3. The other cases can be proved in the same way. By the induction hypothesis, one of the followings holds. (a) Γ ∪ {¬A : e, ¬A : r} contains S : e such that A < S (b) Γ ∪ {¬A : e, ¬A : r} = {A : e, ¬A : r}. Since the case of (b) does not hold, the case of (a) holds. Thus, S : e is not ¬A : e and Γ contains S : e such that A < S. Suppose that ¬P : r ∈ Γ ∪{¬A : r}. By the induction hypothesis, one the following holds. (a) Γ ∪ {¬A : e, ¬A : r} contains Q : e such that P < Q.
(b) Γ ∪ {¬A : e, ¬A : r} = {¬P : r, P : e}. Since S : e ∈ Γ , the number of the elements of Γ ∪ {¬A : e, ¬A : r} is more than three. This implies that the case of (b) does not holds. Thus, the case of (a) holds. If Q is not ¬A, then (A) of (4) holds because Q : e ∈ Γ . Otherwise, P < A and A < S. It follows that Γ ∪ {¬A : r} contains S : e such that P < S. Thus, (A) of (4) holds. Suppose that Γ is a multiset of attribute formulae. Γ e is a multiset of A such that A : e ∈ Γ . The two lemmas follow from Lemma 1. Lemma 2 If Γ ` A : e is proved, then Γ e and Γ e ∪ {A} are connected. If Γ ` A : i is proved, then Γ e ∪ {A} is connected. If Γ ` is proved, then Γ e is connected. Proof of Lemma 2: We prove this lemma by induction on the proof P. 1. P consists of Axiom. The lemma is obvious. 2. P ends with an application of ∧I, ∧E1, ∧E2, ∨I1, ∨I2,→I, ¬I, RAA, ¬E2, EM 1 or EM 2. By the induction hypothesis, the lemma is obvious for these rules. 3. P ends with an application of ∨E1, ∨E2, ∨E3 or ∨E4. We show the proof for the case of ∨E1. The other cases can be proved in the same way. By induction hypothesis, Γ e , ∆e1 ∪ {A} and ∆e2 ∪ {B} are connected. By lemma 1, Γ contains R : e such that A ∨ B v R. It follows that (Γ ∪ ∆1 ∪ ∆2 )e is connected. By lemma 1, ∆e1 ∪ {A} contains S such that C v S. Thus (Γ ∪ ∆1 ∪ ∆2 )e ∪ {C} is connected. 4. P ends with an application of→E. If ϕ is e or i, then by the induction hypothesis, ∆e ∪ {A} is connected and Γ e is so. By lemma 1, Γ contains R : e such that A→B v R. Thus, (Γ ∪ ∆)e and (Γ ∪ ∆)e ∪ {B} are connected. If ϕ is r, then by lemma 1, ∆ ` A : ϕ is ¬C : r ` ¬C : r. Thus, ∆ = {¬C : r} and ¬A is C. (Γ ∪ ∆)e and (Γ ∪ ∆)e ∪ {B} are connected. 8
The right side sequent of premises is the axiom ¬A : r ` ¬A : r because only the axiom concludes the sequent whose right side attribute is r.
5. P ends with an application of ¬E1. If ϕ is e or i, then by the induction hypothesis, Γ e ∪ {A} and ∆e is connected. By lemma 1, ∆ contains R : e such that ¬A v R. Thus (Γ ∪ ∆)e is connected. If ϕ is r, then by lemma 1, Γ ` A : ϕ is A : r ` A : r and Γ = {A : r}. By the induction hypothesis, ∆e ∪{A} is connected, and thus (Γ ∪ ∆)e is connected. 6. P ends with an application of DS1 or DS2. We give the proof for the case of DS1. The other case can be proved in the same way. By the induction hypothesis, Γ e and ∆e ∪ {A} is connected. By lemma 1, Γ contains R : e such that A ∨ B v R. Thus, (Γ ∪ ∆)e and (Γ ∪ ∆)e ∪ {B} are connected. 7. P ends with an application of C1, C2 or C3. We give the proof for the case C2. The other cases can be proved in the same way. We consider each case of the value of φ in C2. (a) If φ is e, then by the induction hypothesis, (Γ ∪ {A : ϕ, A : ϕ})e and (Γ ∪ {A : ϕ, A : ϕ})e ∪ {B} are connected. Thus (Γ ∪ {A : ϕ})e and (Γ ∪ {A : ϕ})e ∪ {B} are connected. (b) If φ is i, then by the induction hypothesis, (Γ ∪ {A : ϕ, A : ϕ})e ∪ {B} is connected. Thus (Γ ∪ {A : ϕ})e ∪ {B} is connected. (c) If φ is r, then by lemma 1, the left side of the conclusion of this rule must consist of only one attribute formulae. Thus, this case never happens. Lemma 3 In ER, the conclusion of the proof of the theorem P is ` P : i. Proof of Lemma 3: Let P be a theorem of ER. Suppose that the conclusion of the proof of P is ` P : e. By lemma 1, there exists Q : e in the left side of the sequent ` P : e such that P v Q. However, the left side of ` P : e is empty. Thus this is inconsistent. It follows that ` P : e is not the conclusion of the proof of P . Suppose that the conclusion of the proof of P is ` P : r. By lemma 1, P : r must be in the left side of the sequent ` P : r. However, the left side of the sequent of ` P : r is empty. Thus this is inconsistent. If follows that ` P : r is not the conclusion of the proof of P .
Therefore, the conclusion of the proof of P is ` P : i.
Lemma 3 implies the next lemma.
Lemma 4 Axiom, ∧E1, ∧E2, ∨E1, ∨E2, ∨E3, ∨E4, →E, ¬E1, ¬E2, RAA, DS1, DS2, C1, C2 or C3 is not the last inference rule of a theorem proof.
Proof of Lemma 4: By lemma 3, the conclusion of a theorem proof is of the form ` A : i. Thus Axiom, ∧E1, ∧E2, ∨E1, →E, RAA, DS1, DS2, C1, C2 or C3 is not the last rule of a theorem proof. Further, ∨E4, ¬E1 or ¬E2 is not the last rule because the right side of the conclusion of these rules is empty. If ∨E2 is the last rule of a theorem proof, then the proof ends with an application of ∨E2 whose premises are ` A ∨ B : e, A : e ` P : i and B : e ` P : ϕ and whose conclusion is ` P : i.
In this proof, ` A ∨ B : e is inferred. However, by lemma 3, this is inconsistent. Thus ∨E2 is not the last rule. Similarly, ∨E3 is not the last rule. Lemma 1 implies the next lemma. Lemma 5 If Γ ` A→B : i is proved and Γ is a multiset of attribute atomic formulae, then Γ 0 , A ` B : ϕ is also proved where – Γ ⊆ Γ 0 and – if C : φ ∈ Γ 0 − Γ , then C : φ ∈ Γ . Proof of Lemma 5: Since Γ contains only attribute atomic formulae, Γ 0 does so. By the definition of inference rules, the last rule of the proof of Γ ` A→B : i is→I, ∨E2, ∨E3 or C2. Thus, there exists a following proof such that the last rule of Γ 0 ` A→B : i is→I, ∨E2 and ∨E3. Γ 0 ` A→B :i .. .. C2 is used more than 0 times Γ ` A→B : i
If the last rule is ∨E2 or ∨E3, then the proof of Γ 0 is as follows. Π1 ` C ∨ D : e
Π2 , C : e ` A→B : i Γ 0 ` A→B : i
Π3 , D : e ` A→B : e
∨E2
By lemma 1, Γ 0 contains P : e such that C ∨ D v P . However, Γ 0 contains only atomic attribute formulae, and thus this is inconsistent. Therefore the last rule is not ∨E2. Similarly, the last rule of Γ 0 ` A→B : i is not ∨E3. It follows that the last rule is→I and the proof of Γ 0 is as follows. .. ..
Γ 0, A : e ` B : ϕ Γ 0 ` A→B : i
→I
Thus we have proved that the lemma holds. These lemmas imply the next theorem. Theorem 1 (Variable-sharing). If A → B is a theorem, then A and B share the same atomic proposition. Proof of Theorem 1: By lemma 4, the last rule of a proof of A→B is →I and the proof is as follows. .. .. A:e`B:ϕ →I ` A→B : i
By lemma 1, A ∼ B. Theorem 2 (Removal of fallacy of relevance). Suppose that A is an atomic proposition and B1 , B2 , · · · and Bn (n ≥ 1) are atomic propositions different from A. A→(B1→(B2→· · · (Bn→A) · · ·) is not a theorem in ER. Proof of Theorem 2: Suppose that ` A→(B1 →· · · (Bn →A) · · ·) : i can be inferred. By lemma 5, A : e, B1 : e, · · · Bn : e ` A : ϕ can be inferred. Since A is an atomic proposition, ϕ is not i. By lemma 1, ϕ is not r. Thus, ϕ is e. By lemma 2, {A, B1 , · · · , Bn } is connected, however, A is an atomic different from B1 , · · · , Bn . This is inconsistent. Thus, ` A → (B1 → · · · (Bn → A) · · ·) : i can not be inferred and A→(B1→· · · (Bn→A) · · ·) is not a theorem. Theorem 3 (Removal of fallacy of validity). Suppose that A and B are different atomic propositions. A→(B∨¬B) and (A∧¬A)→ B are not theorems in ER. Proof of Theorem 3: Theorem 1 implies this theorem obviously. In ER, the following formulae are theorems.
1. 2. 3. 4. 5.
A→(B→(A ∧ B)) (A ∧ B→C)→(A→(B→C)) (A ∧ (¬A ∨ B))→B (A ∨ (B ∧ ¬B))→A A→A ∧ (B ∨ ¬B)
These formulae are not theorems of R[2]. Thus ER is not weaker than R. However, this alone does not show that ER is stronger than R. In the next section, we prove that all theorems of R can be inferred in ER.
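As a small sanity check (illustrative only, using the same tuple encoding as the earlier sketches), each of the five formulae above satisfies the variable-sharing condition of Theorem 1:

```python
def atoms(f):
    return {f[1]} if f[0] == 'atom' else set().union(*(atoms(p) for p in f[1:]))

def shares_variable(imp):
    return bool(atoms(imp[1]) & atoms(imp[2]))

A, B, C = ('atom', 'A'), ('atom', 'B'), ('atom', 'C')
theorems = [
    ('->', A, ('->', B, ('and', A, B))),                        # A→(B→(A∧B))
    ('->', ('->', ('and', A, B), C), ('->', A, ('->', B, C))),  # (A∧B→C)→(A→(B→C))
    ('->', ('and', A, ('or', ('not', A), B)), B),                # (A∧(¬A∨B))→B
    ('->', ('or', A, ('and', B, ('not', B))), A),                # (A∨(B∧¬B))→A
    ('->', A, ('and', A, ('or', B, ('not', B)))),                # A→A∧(B∨¬B)
]
print(all(shares_variable(t) for t in theorems))  # True
```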
5 Comparison of ER and FR
This section proves that ER is stronger than R. However, it is difficult to prove this by a simple comparison between the inference rules of ER and FR, because different restrictions are put on the inference rules of ER and FR, and because ER is a sequent style natural deduction system whereas FR is a Fitch style natural deduction system. To prove that ER is properly stronger than FR, this paper uses FR′, a system that lies between ER and FR with respect to strength. First, we prove that the theorems of FR are theorems of FR′. Next, we show that each proof of FR′ can be normalized and that a proof of ER exists for each normalized proof of FR′. These two results together show that each theorem of FR can be proved in ER. In addition, A→(B→A ∧ B) can be proved in ER but not in FR; thus we will have proved that ER is properly stronger than FR.
5.1 FR
This subsection describes FR, a natural deduction system for the relevant logic R [2]. FR is a Fitch style natural deduction system in which a set of natural numbers is attached to each formula in a proof, and this set determines whether the inference rules can be applied. This mechanism prevents fallacies. The set of natural numbers indicates the hypotheses which are used for the inference of the formula carrying that set. In the following, we introduce the Fitch style natural deduction system [2], and we use a ] b for the union of sets a and b such that a ∩ b = ∅.
Definition 8 Suppose that A is a formula and a is a set of natural numbers. Aa is defined as a rank formula.
Definition 9 A proof of FR is defined as a finite sequence F1, F2, · · ·, Fn of rank formulae satisfying the following conditions.
1. A natural number called "rank" is attached to each Fi (1 ≤ i ≤ n).
2. Fi (1 ≤ i ≤ n) is deduced by one of the following rules using some of F1, F2, · · ·, Fi−1 as premises.
3. The premises which an inference rule uses in order to deduce Fi+1 have the same rank as Fi.
Inference rules
– Hyp Hyp may introduce A{k} as a hypothesis. If the formula is at the top of the proof, the rank of this rank formula is 1. If A{k} is introduced after the sequence F1 · · · Fi, then the rank of A{k} is the rank of Fi plus 1.
– Rep If Aa occurs in the proof, then Rep may introduce Aa. If Aa is introduced after the sequence F1 · · · Fi, then the rank of Aa is the rank of Fi.
– →I →I infers A→Ba from Ba]{k}, where k is the same as the rank of Ba]{k} and A{k} is the nearest formula that is introduced by the rule Hyp. The rank of A→Ba is that of Ba]{k} minus one.
– →E →E infers Ba∪b from Aa and A→Bb. The rank of Ba∪b is the same as that of Aa or A→Bb.
– ¬I ¬I infers ¬Aa from A→¬Aa. The rank of ¬Aa is the same as that of A→¬Aa.
– ¬E ¬E infers ¬Ba∪b from ¬Aa and B→Ab. The rank of ¬Ba∪b is the same as that of ¬Aa or B→Ab.
– ¬¬I ¬¬I infers ¬¬Aa from Aa. The rank of ¬¬Aa is the same as that of Aa.
– ¬¬E ¬¬E infers Aa from ¬¬Aa. The rank of Aa is the same as that of ¬¬Aa.
– ∧I ∧I infers A ∧ Ba from Aa and Ba. The rank of A ∧ Ba is the same as that of Aa or Ba.
– ∧E ∧E infers Aa or Ba from A ∧ Ba. The rank of Aa or Ba is the same as that of A ∧ Ba.
– ∨I ∨I infers A ∨ Ba from Aa or Ba. The rank of A ∨ Ba is the same as that of Aa or Ba.
– ∨E ∨E infers Ca∪b from A ∨ Ba, A→Cb and B→Cb. The rank of Ca∪b is the same as that of A ∨ Ba, A→Cb or B→Cb.
– ∧∨ ∧∨ infers (A ∧ B) ∨ Ca from A ∧ (B ∨ C)a. The rank of (A ∧ B) ∨ Ca is the same as that of A ∧ (B ∨ C)a.
Definition 10 (Theorem of FR) If A∅ can be deduced, then A is a theorem of FR.
In FR, the fallacies of relevance and validity are removed and variable-sharing holds[2].
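The relevance bookkeeping of FR can be pictured by tracking the subscript sets directly. The following sketch is illustrative only and is not the paper's notation: →E takes the union of the index sets, and →I may only discharge an index that actually occurs in the conclusion's set, which is why A{1} yields A→A∅ while B→A cannot be formed.

```python
# Illustrative sketch of FR's index bookkeeping (not the paper's notation).
def hyp(formula, k):
    return (formula, frozenset({k}))

def imp_elim(minor, major):
    """→E: from A_a and (A→B)_b infer B_{a∪b}."""
    (a_form, a_idx), ((arrow, ant, cons), b_idx) = minor, major
    assert arrow == '->' and a_form == ant
    return (cons, a_idx | b_idx)

def imp_intro(hypothesis, conclusion):
    """→I: discharge A_{k}; the index k must actually occur in the conclusion's set."""
    (a_form, a_idx), (b_form, b_idx) = hypothesis, conclusion
    (k,) = tuple(a_idx)
    if k not in b_idx:
        raise ValueError('hypothesis %r was not used; →I is not applicable' % (a_form,))
    return (('->', a_form, b_form), b_idx - {k})

A, B = ('atom', 'A'), ('atom', 'B')
a1 = hyp(A, 1)
print(imp_intro(a1, a1))          # (('->', A, A), frozenset()): A→A is a theorem
try:
    imp_intro(hyp(B, 2), a1)      # B→A would need index 2 in {1}: the fallacy is blocked
except ValueError as e:
    print(e)
```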
5.2 FR′
We use FR′ to prove that ER is properly stronger than R with respect to provability. FR′ is a sequent style natural deduction system. In the following, we give some preparations needed to define FR′.
Definition 11 (Meta rank formula) Meta rank formulae are inductively defined as follows:
1. A rank formula is a meta rank formula.
2. If δ is a multiset of meta rank formulae and a is a set of natural numbers, then δa is a meta rank formula.
3. If δ and π are multisets of meta rank formulae and Aa is a rank formula, then ⟨δ|π⟩1, ⟨δ|π⟩2 and ⟨δ|π|Aa⟩4 are meta rank formulae.
The rank set of a meta rank formula R, or of a multiset of meta rank formulae, is defined as follows:
Fig. 9. The inference rules of FR′: the axiom A{k} ` A{k} and the rules RAA, →I, →E, ¬I, ¬E, ∧I, ∧E1, ∧E2, ∨I1, ∨I2, ∨E, ∨D1–∨D5, P, O and C, subject to the following side conditions:
– In ∨D1, the rank set of γ must be a and the rank sets of δ1 and δ2 are {k}.
– In ∨D3 and ∨D5, the rank sets of γ ∪ π1 and γ ∪ π2 are the same.
– In P and O, the rank set of γ is a and the rank set of δ is b.
– In rule C, δ′ is the result of applying one of the following to a multiset of meta rank formulae λ which occurs in δ:
1. Replace Cc, Cc with Cc in λ.
2. Replace γ1b, γ2b with (γ1 ∪ γ2)b in λ, where γ1 and γ2 are multisets of meta rank formulae whose rank sets are the same.
1. If R is of the form Aa such that Aa is a rank formula, then the rank set of R is a.
2. If R is of the form δa such that δ is a multiset of meta rank formulae and a is a set of natural numbers, then the rank set of R is a.
3. The rank set of ⟨δ|π⟩1, ⟨δ|π⟩2 and ⟨δ|π|Aa⟩4 is the rank set of δ.
4. The rank set of a multiset γ of meta rank formulae is the union of the rank sets of the elements of γ.
Definition 12 In FR′, γ ` Aa and γ `b are defined as sequents, where Aa is a rank formula, b is a set of natural numbers and γ is a multiset of meta rank formulae. A proof of a sequent Fn of FR′ is defined as a finite sequence of sequents F1, F2, · · ·, Fn satisfying the following conditions.
1. Fi (1 ≤ i ≤ n) is deduced by one of the inference rules in FIGURE 9 using some of F1, F2, · · ·, Fi−1 as premises, and F1, F2, · · ·, Fn−1 must be used for the deduction of Fn. In FIGURE 9, A and B denote formulae, A also denotes a formula or an empty right side, and γ, δ, π, · · · denote multisets of meta rank formulae.
2. Suppose that Fi is deduced by →I, ¬I or RAA and that A{k} is discharged 9 by one of these inference rules in deducing Fi. Then A{k} does not occur in Fm (i ≤ m ≤ n).
3. If there are rank formulae with the same rank set on the left side of a sequent, then these formulae are the same.
Fl, Fl+1, · · ·, Fm (1 ≤ l ≤ m ≤ n) is defined as a subproof of F1, F2, · · ·, Fn, where Fl, · · ·, Fm−1 are used as premises in the derivation of Fm. In the definition of a proof of FR′, some restrictions are put on rank sets. This is necessary in order to reduce the complexity of the normalization of a proof. In FR′, the rules ∨D1, ∨D2, · · ·, ∨D5, P, O and C are used to normalize proofs under the rank-set restrictions on inference rule application.
Definition 13 (Theorems of FR′) Suppose that F1, F2, · · ·, Fn is a proof of FR′ and Fn is ` A∅. Then A is defined as a theorem of FR′.
9 We say that ¬A{k} in RAA, or A{k} in →I or ¬I, is the rank formula discharged by the respective rule.
Next, the theorems of F R are proved to be the theorems of F R0 . First, we define hypotheses of a proof in F R. Definition 14 (Hypotheses of proof in F R) Suppose that F1 , F2 , · · ·, Fn is a proof F R. If Fi satisfies the followings, then Fi is a hypothesis of this proof. 1. Fi is introduced by inference rule Hyp. 2. The rank of Fk (i ≤ k ≤ n) is not smaller than that of Fi . Lemma 6 Suppose that Aa is deduced in F R. γ ` Aa is deduced in F R0 , where γ is a multiset of the hypotheses of the proof of Aa . Proof of Lemma 6: We prove this lemma by induction. In each induction step, we replace natural numbers in rank set in order to construct the proof of F R0 satisfying 2 and 3 of definition 9. 1. The last inference rule of deducing Aa is Hyp. Obviously, we can prove A{k} ` A{k} in F R0 . 2. The last inference rule of deducing Aa is Rep,→I, ∧E1, ∧E2, ∨I1 or ∨I2. By the induction hypothesis, the claim in these cases is trivial. 3. The last inference rule of deducing Aa is ∧I. Suppose that Aa and Ba are inferred in F R. By the induction hypothesis, γ 1 ` Aa and γ 2 ` Ba are inferred in F R0 , where γ 1 is the multiset of the hypotheses of a proof of Aa and γ 2 is that of Ba in F R. If a natural number x is discharged by the rule RAA,→I or ¬I in the proof of γ 1 ` Aa and exists in the proof of γ 2 ` Ba , then we replace x with a new natural number y which does not occur in the proof of γ 1 ` Aa or γ 2 ` Ba . We apply the same procedure to the proof of γ 2 ` Ba . By this procedure, the proofs of γ 1 ` Aa and γ 2 ` Ba do not contain the same natural number discharged by the rule RAA,→I or ¬I. Thus we obtain the following proof of γ 1 , γ 2 ` A ∧ Ba . γ 1 ` Aa
γ 2 ` Ba
γ 1 , γ 2 ` A ∧ Ba
∧I
Obviously, γ 1 ∪ γ 2 is a multi set of hypotheses of the proof of A ∧ Ba in F R.
4. → E Suppose that A→Ba and Ab are inferred in F R. By the induction hypothesis, γ ` A→Ba and δ ` Ab are inferred in F R0 , where γ is the multiset of the hypotheses of a proof of A→Ba and δ is that of Ab in F R. If a natural number x is discharged by the rule RAA,→I or ¬I in the proof of γ ` A→Ba and exists in the proof of δ ` Ab , then we replace x with a new natural number y which does not occur in the proofs of γ ` A→Ba or δ ` Ab . We apply the same procedure to the proof of δ ` Ab . By this procedure, the proofs of γ ` A→Ba and γ ` Ab do not contain the same natural number discharged by the rule RAA,→I or ¬I. Thus we obtain the following proof of γ, δ ` Ba∪b . γ ` A→Ba δ ` Ab →E γ, δ ` Ba∪b
Obviously, γ ∪ δ is a multiset of hypotheses of the proof of Ba∪b in F R. 5. ¬E Suppose that ¬Aa and B→Ab are inferred in F R. By the induction hypothesis, γ ` ¬Aa and δ ` B →Ab are inferred in F R0 , where γ is the multiset of hypotheses of the proofs of ¬Aa and δ is that of B→Ab in F R. If a natural number x is discharged by the rule RAA,→I or ¬I in the proof of γ ` ¬Aa and exists in the proof of δ ` B→Ab , then we replace x with a new natural number y which does not occur in the proofs of γ ` ¬Aa or δ ` B→Ab . We apply the same procedure to the proof of δ ` B→Ab . By this procedure, the proofs of γ ` ¬Aa and γ ` B→Ab do not contain the same natural number discharged by the rule RAA,→I or ¬I. Thus we obtain the following proof of γ, δ ` ¬Ba∪b . δ ` B→Ab γ ` ¬Aa
B{k} ` B{k}
δ, B{k} ` Ab]{k}
γ, δ, B{k} `(a∪b)]{k} γ, δ ` ¬Ba∪b
→E
¬E
¬I
Obviously, γ ∪ δ is a multiset of hypotheses of the proof of ¬Ba∪b in F R. 6. ¬I Suppose that A→¬Ab is inferred in F R. By the induction hypothesis, γ ` A→¬Aa is inferred in F R0 , where γ is a multiset of the hypotheses of the proof of A→¬Ab in F R. By using a natural
number k which does not occur in the proof of γ ` A→¬Aa , we obtain the following proof of γ ` ¬Aa . γ ` A→¬Aa
A{k} ` A{k}
γ, A{k} ` ¬Aa]{k}
→E
A{k} ` A{k}
γ, A{k} , A{k} `a]{k} γ, A{k} `a]{k} γ ` ¬Aa
¬E
C
¬I
Obviously, γ is the multiset of the hypotheses of the proof of ¬Aa in F R. 7. ¬¬I Suppose that Aa is inferred in F R. By the induction hypothesis, γ ` Aa is inferred in F R0 , where γ is the multiset of the hypotheses of the proof of Aa in F R. By using a natural number k which does not occur in the proof of γ ` Aa , we obtain the following proof of γ ` ¬¬Aa . ¬A{k} ` ¬A{k}
γ ` Aa
γ, ¬A{k} `a]{k} γ ` ¬¬Aa
¬E
¬I
Obviously, γ is the multiset of the hypotheses of the proof of ¬¬Aa in F R. 8. ¬¬E Suppose that ¬¬Aa is inferred in F R. By the induction hypothesis, γ ` ¬¬Aa is inferred in F R0 , where γ is the multiset of the hypotheses of the proof of ¬¬Aa . By using a natural number k which does not occur in the proof of γ ` ¬¬Aa , we obtain the following proof of γ ` Aa . γ ` ¬¬Aa
¬A{k} ` ¬A{k}
γ, ¬A{k} `a]{k} γ ` Aa
¬E
RAA
Obviously, γ is the multiset of the hypotheses of the proof of Aa in F R. 9. ∨E Suppose that A∨Ba , A→Cb and B→Cb are inferred in F R. By the induction hypothesis, γ ` A∨Ba , δ 1 ` A→Cb and δ 2 ` B→Cb are inferred in F R0 , where γ of the multiset of the hypotheses of the proof of A∨Ba , δ 1 is that of A→Cb , and δ 2 is that of B→Cb . If a natural number x is discharged by the rule RAA,→I or ¬I in the proof of γ ` A∨Ba and exists in the proof of δ 1 ` A→Cb or
δ 2 ` B→Cb , we replace x with a new natural number y which does not occur in any of the three proofs. We apply the same procedure to the proof of δ 1 ` A→Cb and δ 2 ` B→Cb . By this procedure, the proofs of γ ` A ∨ Ba , δ 1 ` A→Cb and δ 2 ` B →Cb do not contain the same natural number discharged by the rule RAA,→I or ¬I. By using natural numbers k, l, p, q which do not occur in the proof of γ ` ¬¬Aa , we obtain the proof of γ, δ 1 , δ 2 ` Ca∪b , as shown in Fig.10, where γ ∪δ 1 ∪δ 2 is the multiset of the hypotheses of the proof of Ca∪b . 10. ∧∨ Suppose that A ∧ (B ∨ C)a is inferred in F R. By the induction hypothesis, γ ` A ∧ (B ∨ C)a is inferred in F R0 , where γ is the multiset of the hypotheses of the proof of A∧(B∨C)a in F R. By using natural numbers k, p, q which do not occur in the proof, we obtain the proof of γ, γ ` A∧B ∨Ca as shown in Fig.11, where γ ∪ γ is the multiset of the hypotheses of the proof of Ca in F R. The next lemma follows from Lemma 6. Lemma 7 The theorems of F R are theorems of F R0 . Proof of Lemma 7: If A is a theorem of F R, then A∅ is inferred in F R. By lemma 6, ` A∅ is inferred in F R0 . Thus, A is a theorem of F R0 . 5.3
Normalization of the proof of F R0
This subsection proves a normalization theorem stating that every proof in FR′ can be reduced to a normal proof. Since sets of natural numbers and some special inference rules are used in FR′, it is difficult to normalize a proof of FR′ by the normalization method of classical logic alone. In addition, since each normalized proof of FR′ must correspond to a proof of ER, the restriction on RAA in ER must be reflected in the normalized proofs. Thus, the normalization procedure proposed in this subsection consists of three procedures: the first is a normalization procedure for RAA, the second is the normalization procedure for intuitionistic logic given in [11], and the third is a normalization procedure for the inference rules original to FR′.
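Both the RAA number and the normalization number used below have the form x·ω + y, i.e. they are compared lexicographically: first the maximal complexity x, then the count y of worst cases. A minimal illustrative sketch of that comparison (names are assumptions of this sketch):

```python
def measure(max_complexity, count_of_worst):
    """Represent x·ω + y as a pair; lexicographic order mirrors the ordinal order."""
    return (max_complexity, count_of_worst)

def decreases(before, after):
    """True if the measure strictly decreases, so the conversion process terminates."""
    return after < before  # tuple comparison is lexicographic

# Replacing one maximal-complexity RAA (or cut) either lowers x, or keeps x and lowers y.
print(decreases(measure(5, 3), measure(5, 2)))  # True: same x, fewer worst cases
print(decreases(measure(5, 1), measure(4, 9)))  # True: smaller x dominates any y
print(decreases(measure(5, 1), measure(5, 1)))  # False
```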
Fig. 10. The proof of γ, δ1, δ2 ` Ca∪b constructed in the ∨E case of the proof of Lemma 6.

Fig. 11. The proof of γ, γ ` (A ∧ B) ∨ Ca constructed in the ∧∨ case of the proof of Lemma 6.
First, we give some definitions that are needed for the proof of the normalization theorem of F R0 . Definition 15 (Complexity of formula) Complexity of a formula or a blank is defined as follows: If A is a blank, then the complexity of A is 0. If A is a formula, then the complexity of A is the number of occurrences of atomic propositions and connectives in A. The complexity of a rank formula Aa is the complexity of A. Definition 16 (I-rule chain) We define I-rule to be ∧I, ∨I1, ∨I2, →I, ¬I or ∨D5 in F R0 . In a proof of F R0 , I-rule chain is is the occurrence of a finite sequence of the sequent F1 , F2 , · · · , Fn (n ≥ 1) and satisfies the followings. 1. F1 is a conclusion of I-rule. 2. Fi (2 ≤ i ≤ n) is a conclusion of inference rules P, O, C using Fi−1 as premise. By the definition of I-rule chain, the right side of the members of I-rule chain is the same rank formula. We define the complexity of I-rule chain is defined as the complexity of this same formula. Definition 17 (cut) Cut of a proof of F R0 is defined as I-rule chain F1 , F2 , · · · , Fn (n ≥ 1) with Fn is a major premise 10 of E-rule11 . Since cut is I-rule chain, the complexity of cut is defined like that of I-rule chain. Definition 18 (normalized proof ) In F R0 , a proof P is a normalized proof if and only if P satisfies the followings. 1. The formulae that are discharged ative literal. 2. Cut does not exist in P.
12
by RAA in P are only neg-
In the following, we will prove that every theorem can be deduced using normalized proofs. First, we prove some necessary lemmas. 10
11 12
major premise means γ ` A ∧ Ba of ∧E1 and ∧E2, γ ` A→Ba of→E, γ ` ¬Aa of ¬E and γ ` A ∨ Ba of ∨E E-rule means ∧E1, ∧E2,→E, ∨E and ¬E. If A{k} is rank formula which is discharge by→I, ¬I or RAA, A is discharged by these rules.
Lemma 8 In F R0 , if sequent F is proved, then F can be deduced in the proof in which only negative literals are discharged by RAA. Proof of lemma 8: We define x · ω + y of a proof as RAA number, where – x is a maximum of complexities of formulas discharged by RAA in P. – y is the number of maximum complexity cut in a proof, – ω is a cardinal number. In the following, we show a conversion of a proof of F R0 . By this conversion, we obtain a new proof P 0 from P; the conclusions of these two proof are the same and RAA number of P 0 is less than that of P. 1. In P, ¬(F ∧ G){k} is a maximum complexity formula discharged by RAA. We can suppose there exists the following infernce in P. γ, ¬(F ∧ G){k} `c]{k} γ ` F ∧ Gc
RAA
In the proof, ¬(F ∧ G){k} ` ¬(F ∧ G){k} is introduced by axiom. We replace those occurrences of axiom with the following proof, p and l are place holders for natural numbers. F ∧ G{l} ` F ∧ G{l} ¬F{p} ` ¬F{p}
F ∧ G{l} ` F{l}
¬F{p} , F ∧ G{l} `{p,l} ¬F{p} ` ¬(F ∧ G){p}
∧E1
¬E
¬I
By this replacement, we obtain the proof of γ, ¬F{p} `c]{p} from that of γ, ¬(F ∧ G){k} `c]{k} . Similarly, we obtain the proof of γ, ¬G{q} `c]{q} . These then give the following proof γ, ¬F{p} `c]{p} γ ` Fc
RAA
γ, ¬G{q} `c]{q}
γ, γ ` F ∧ Gc .. . C . γ ` F ∧ Gc
γ ` Gc
RAA
∧I
By replacing a proof of γ ` F ∧ Gc in P with this proof, we can obtain a new proof P 0 , where the conclusions of these two proof are the same and RAA number of P 0 is less than that of P.
2. In P, ¬(F →G){k} is a maximum complexity formula discharged by RAA. We can suppose there exists the following infernce in P. γ, ¬(F→G){k} `c]{k} γ ` F→Gc
RAA
In this proof, ¬(F → G){k} ` ¬(F → G){k} is introduced by axiom. We replace those occurrences of axiom with the following proof where the natural numbers f , g and l are not used as place holders. F→G{l} ` F→G{l} ¬G{g} ` ¬G{g}
F{f} ` F{f}
F{f} , F→G{l} ` G{f,l}
F{f} , ¬G{g} , F→G{l} `{f,g,l}
→E
¬E
¬I
F{f} , ¬G{g} ` ¬(F→G){f,g} {F{f} , ¬G{g}}{k} ` ¬(F→G){k}
P
By this replacement, we obtain the proof of γ, {F{f} , ¬G{g}}{k} `e]{k} from that of γ, ¬(F →G){k} `e]{k} , provided that if the inference rule C replaces ¬(F →G){k} , ¬(F →G){k} with ¬(F →G){k} in the proof of γ, ¬(F→G){k} `e]{k} , we replace {F{f} , ¬G{g}}{k} , {F{f} , ¬G{g}}{k} with {F{f} , ¬G{g}}{k} by using the inference rule C several times in the proof of γ, {F{f} , ¬G{g}}{k} `e]{k} . We can infer γ ` F→Ge from the proof of γ, {F{f} , ¬G{g}}{k} `e]{k} . γ, {F{f} , ¬G{g}}{k} `e]{k} γ, F{f} , ¬G{g} `e]{f,g} γ, F{f} ` Ge]{f} γ ` F→Ge
O RAA
→I
By replacing a proof of γ ` F→Gc in P with this proof, we can obtain a new proof P 0 , where the conclusions of these two proof are the same and RAA number of P 0 is less than that of P. 3. In P, ¬(F ∨ G){k} is a maximum complexity formula discharged by RAA. We can suppose there exists the following infernce in P. γ, ¬(F ∨ G){k} `c]{k} γ ` F ∨ Gc
RAA
In this proof, ¬(F ∨ G){k} ` ¬(F ∨ G){k} is introduced by axiom. We replace those occurrences of axiom with the following proof where the natural number f , g and l are not used as place holders. ¬F{f} ` ¬F{f} F ∨ G{l} ` F ∨ G{l}
P
{¬F{f}}{k}} ` ¬F{k}
¬G{g} ` ¬G{g} {¬G{g}}{k} ` ¬F{k}
F ∨ G{l} , {¬F{f}}{k} , {¬G{g}}{k} `{l,k} {¬F{f}}{k} , {¬G{g}}{k} ` ¬(F ∨ G){k}
P ∨E
¬I
By this replacement, we obtain a proof γ, {¬F{f}}{k} , {¬G{g}}{k} `c]{k} from that of γ, ¬(F ∨ G){k} `c]{k} , provided that if the inference rule C replaces ¬(F ∨ G){k} , ¬(F ∨ G){k} with ¬(F ∨ G){k} in the proof of γ, ¬(F ∨ G){k} `e]{k} , we replace by using the inference rule C several times {¬F{f}}{k} , {¬G{g}}{k} , {¬F{f} }{k} , {¬G{g}}{k} with {¬F{f} }{k} , {¬G{g}}{k} in the proof of γ, {F{f}}{k} , {¬G{g}}{k} `e]{k} We obtain the following proof from γ, {¬F{f}}{k} , {¬G{g}}{k} `c]{k} . γ, {¬F{f}}{k} , {¬G{g}}{k} `c]{k} {¬F{f}}{k} , hγ|{¬G{g}}{k} i1 `c]{k} ¬F{f} , hγ|{¬G{g}}{k} i1 `c]{f} hγ|{¬G{g}}{k} i1 ` Fc {¬G{g}}{k} , hγ|∅|Fc i4 `c]{k} ¬G{g} , hγ|∅|Fc i4 `c]{g}
∨D1 O
RAA ∨D4 O
RAA hγ|∅|Fc i4 ` Gc ∨D5 γ ` F ∨ Gc
By replacing a proof of γ ` F ∨ Gc in P with this proof, we can obtain a new proof P 0 , where the conclusions of these two proof are the same and RAA number of P 0 is less than that of P. 4. In P, ¬¬F{k} is a maximum complexity formula discharged by RAA. We can suppose there exists the following infernce in P. γ, ¬¬F{k} `c]{k} γ ` ¬Fc
RAA
In this proof, ¬¬F{k} ` ¬¬F{k} is introduced by axiom. We replace those occurrences of axiom with the following proof where the natural number l is not used as a place holder. In addition, we use different l for every replacement. ¬F{l} ` ¬F{l}
F{k} ` F{k}
F{k} , ¬F{l} `{k,l} F{k} ` ¬¬F{k}
¬I
¬E
By this replacement, we obtain a proof of γ, F{k} `c]{k} from that of γ, ¬¬F{k} `c]{k} . γ, F{k} `c]{k} γ ` ¬Fc
¬I
By replacing the proof of γ ⊢ ¬Fc in P with this proof, we obtain a new proof P′ whose conclusion is the same and whose RAA number is less than that of P.

As described above, we can convert any proof into a proof of the same conclusion in which only negative literal formulae are discharged by RAA. Thus, if a sequent F is provable in F R0, then F can be deduced by a proof in which only negative literals are discharged by RAA. Next, we prove that if there is a proof in which only negative literals are discharged by RAA, then the same conclusion can be proved by a proof in which only negative literals are discharged by RAA and no cut exists. To prove this, we introduce some definitions and lemmas.

Definition 19 (normalization number) Suppose that x is the maximum of the complexities of the cuts in a proof and y is the number of cuts of maximum complexity in the proof. Then x · ω + y is called the normalization number of the proof, where ω is a cardinal number.

Lemma 9 If a subproof P satisfies the following conditions,
– P infers ⊢ B∅ from ψ ⊢ Aa,
– π1 ∪ π2 occurs in ψ, where π1 and π2 are multisets of meta-rank formulae with the same rank set,
then more than one inference rule is used in P.

Proof of Lemma 9: If only one inference rule is used in P, then ⊢ B∅ is obtained by applying a single inference rule to ψ ⊢ Aa. In this case, by the definition of the inference rules, ψ does not include π1 ∪ π2. Thus, more than one inference rule is used in P.

Lemma 10 If a subproof P satisfies the following conditions,
– P infers ⊢ B∅ from ψ ⊢ Aa,
– k inference rules are used in P,
– π1 ∪ π2 occurs in ψ, where π1 and π2 are multisets of meta-rank formulae with the same rank set,
then there exists a subproof P′ satisfying the following conditions.
– P′ infers ⊢ B∅ from ψ† ⊢ Aa, where ψ† is ψ − π2.
– Fewer than k inference rules are used in P′.
– The normalization number of P′ is not more than that of P.
Proof of Lemma 10: We prove this lemma by induction on the number of inference rules used between ψ ⊢ Aa and ⊢ B∅. By Lemma 9, k ≥ 2. If k = 2, then P has one of the following forms:

   from ¬C{l}, ¬C{l} ⊢{l} infer ¬C{l} ⊢{l} by C, and then ⊢ B∅ by RAA;
   from C{l}, C{l} ⊢ A{l} infer C{l} ⊢ A{l} by C, and then ⊢ C→A∅ by →I.

From these proofs we obtain, respectively, the following proofs:

   from ¬C{l} ⊢{l} infer ⊢ B∅ by RAA;
   from C{l} ⊢ A{l} infer ⊢ C→A∅ by →I.

Thus, the claim of the lemma holds in the case k = 2.

Next, we prove the induction step. Suppose that k = m + 1 (m ≥ 2).

1. The case that ∨D1 applies to ψ ⊢ Aa. Suppose that π1 ∪ π2 ⊆ γ, π1 ∪ π2 ⊆ δ1 or π1 ∪ π2 ⊆ δ2. By the definition of the inference rules, ψ ⊢ Aa is of the form γ, δ1, δ2 ⊢b]c. The conclusion of the application of ∨D1 is δ1, ⟨γ|δ2⟩1 ⊢b∪d. By the induction hypothesis, we obtain a subproof P′ satisfying the following.
– P′ infers ⊢ B∅ from ψ‡ ⊢b∪d, which is the result of removing π2 from δ1, ⟨γ|δ2⟩1 ⊢b∪d.
– Fewer than m inference rules are used in P′.
Suppose that ψ† is the result of removing π2 from ψ. We infer ψ‡ ⊢b∪d from ψ† ⊢b]c by ∨D1. Therefore, the claim of the lemma holds. Next, suppose that π1 ∪ π2 ⊈ γ, π1 ∪ π2 ⊈ δ1 and π1 ∪ π2 ⊈ δ2. Let γ† ∪ δ1† ∪ δ2† be the result of removing π2 from γ ∪ δ1 ∪ δ2. We consider cases based on γ† ∪ δ1† ∪ δ2†.
Fig. 12. The two possible forms of a proof beginning at γ, δ1, δ2 ⊢b]c: one continues with ∨D1, ∨D2 and ∨D3 to γ, φ1, φ2 ⊢b∪d, the other with ∨D1, ∨D4 and ∨D5 to γ, φ1, φ2 ⊢ P ∨ Qb∪d; both end at ⊢ B∅.

Fig. 13. The corresponding proofs beginning at γ, δ2 ⊢b]c, ending at γ, φ2 ⊢b∪d and (by ∨I2) at γ, φ2 ⊢ P ∨ Qb∪d.

Fig. 14. The conversion rule for a cut whose beginning is a conclusion of ∧I.

Fig. 15. The conversion rule for a cut whose beginning is a conclusion of →I.
(a) The case that δ1† = ∅. In this case, the proof which begins at γ, δ1, δ2 ⊢b]c is one of the proofs in Figure 12. Thus, we obtain the corresponding proof in Figure 13, which begins at γ, δ2 ⊢b]c. By the induction hypothesis, there exists a proof which infers ⊢ B∅ from γ, φ2 ⊢b∪d or from γ, φ2 ⊢ P ∨ Qb∪d. Thus, there is a proof whose conclusion is ⊢ B∅ and which uses fewer than k inference rules between γ, δ2 ⊢b]c and ⊢ B∅. By this fact and by the induction hypothesis, there exists a proof which uses fewer than k inference rules between γ†, δ2† ⊢b]c and ⊢ B∅.
(b) The case that δ2† = ∅. We can prove this case similarly to (a).
(c) Other cases. The claim follows directly from the induction hypothesis.

2. The case that the inference rule C applies to ψ ⊢ Aa. Let ψ′ ⊢ Aa be the conclusion of the application of C to ψ ⊢ Aa. In the case that π1 ∪ π2 occurs in ψ′, let φ be ψ′ from which π2 is removed. By the induction hypothesis, there exists a proof which uses fewer than k inference rules between φ ⊢ Aa and ⊢ B∅. Moreover, we obtain φ ⊢ Aa by applying the inference rule C to ψ† ⊢ Aa. Thus, there exists a proof which uses fewer than k inference rules between ψ† ⊢ Aa and ⊢ B∅. In the case that π1 ∪ π2 does not occur in ψ′, the following cases are possible.
(a) Cc, Cc or γ1b, γ2b occur in π1.
(b) Cc, Cc or γ1b, γ2b occur in π2.
(c) Cc ∈ π1 and Cc ∈ π2.
(d) γ1b ∈ π1 and γ2b ∈ π2.
(e) γ2b ∈ π1 and γ1b ∈ π2.
In case (a), let π1† be π1 in which Cc and Cc are replaced with Cc, or γ1b and γ2b are replaced with (γ1 ∪ γ2)b. Then π1† and π2 have the same rank set, and ψ′ contains π1† ∪ π2. Let φ be ψ′ from which π2 is removed. By the induction hypothesis, there exists a proof which uses fewer than k inference rules between φ ⊢ Aa and ⊢ B∅. Moreover, we obtain φ ⊢ Aa by applying the inference rule C to ψ† ⊢ Aa. It then follows that there exists a proof which uses fewer than k inference rules between ψ† ⊢ Aa and ⊢ B∅.
Case (b) is proved in a similar way to case (a). In case (c), let φ1 be π1 − {Cc} and φ2 be π2 − {Cc}. Then φ1, φ2 and Cc occur in ψ′. Suppose that λ is φ1 from which Cc is removed. Either λ and φ2 have the same rank set, or λ ∪ {Cc} and φ2 have the same rank set. Moreover, ψ′ from which φ2 is removed is ψ†. Thus, by the induction hypothesis, there exists a proof which uses fewer than k inference rules between ψ† ⊢ Aa and ⊢ B∅. In case (d), let φ1 be π1 − {γ1b} and φ2 be π2 − {γ2b}. Then φ1, φ2 and (γ1 ∪ γ2)b occur in ψ′. Let ψ′′ be ψ′ from which γ2 is removed. Since γ1 and γ2 have the same rank set, there exists a proof which uses fewer than k inference rules between ψ′′ ⊢ Aa and ⊢ B∅. Suppose that λ is φ1 from which the elements with rank set b are removed. Either λ and φ2 have the same rank set, or λ ∪ {γ1b} and φ2 have the same rank set. Moreover, ψ′ from which φ2 is removed is ψ†. Thus, by the induction hypothesis, there exists a proof which uses fewer than k inference rules between ψ† ⊢ Aa and ⊢ B∅. Case (e) is proved similarly to case (d).

3. The case that any other inference rule applies to ψ ⊢ Aa. Let φ ⊢ Dd be the conclusion of the application of that rule to ψ ⊢ Aa. By the definition of the inference rules, π1 ∪ π2 also occurs in φ. We can infer ⊢ B∅ from φ ⊢ Dd. Thus, by the induction hypothesis, there exists a proof which uses fewer than k inference rules between φ† ⊢ Dd, which is φ ⊢ Dd from which π2 is removed, and ⊢ B∅. On the other hand, we can prove φ† ⊢ Dd from ψ† ⊢ Aa. It follows that there exists a proof which uses fewer than k inference rules between ψ† ⊢ Aa and ⊢ B∅.

Clearly, the normalization number of the proof which infers ⊢ B∅ from ψ† ⊢ Aa is not more than that of the proof which infers ⊢ B∅ from ψ ⊢ Aa. The next lemma follows from the above discussion.

Lemma 11 For any theorem of F R0, a normalized proof exists.

Proof of Lemma 11: By Lemma 8, for each theorem of F R0, there exists a proof in which only negative literals are discharged by RAA. We repeatedly apply the following conversion rules to a cut of maximum complexity in the proof until we obtain a proof which does not include any cut.
1. The conversion in Figure 14 applies to a cut whose beginning is a conclusion of ∧I.
2. The conversion in Figure 15 applies to a cut whose beginning is a conclusion of →I.
3. The conversion in Figure 16 applies to a cut whose beginning is a conclusion of ¬I.
4. The conversion in Figure 17 applies to a cut whose beginning is a conclusion of ∨I1 or ∨I2.
5. Using a natural number which is not yet used in the proof, the conversion in Figure 18 applies to a cut whose beginning is a conclusion of ∨D5.
The rules (1) to (4) above are based on the rules in [11]. By definition, the beginning of a cut is a conclusion of ∧I, ∨I, →I, ¬I or ∨D5. Thus, if there is a cut in a proof, one of the five conversion rules above can be applied. Since the normalization number of the proof decreases after each application of a conversion rule and its minimum is zero, this procedure must terminate. Further, none of the conversion rules adds a negative literal discharged by RAA. Thus, after the procedure terminates, we obtain a normalized proof for each theorem of F R0.
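The termination argument can be made concrete by reading the normalization number of Definition 19 as the lexicographic pair (x, y): each conversion rule removes one cut of maximum complexity x and may only introduce cuts of strictly smaller complexity, so either y decreases while x stays the same, or x itself decreases. The following sketch is illustrative only; the encoding of a proof by the list of its cut complexities is a hypothetical simplification and is not part of the paper.

def normalization_number(cut_complexities):
    """Return the measure (x, y) for a proof, given the complexities of its cuts."""
    if not cut_complexities:
        return (0, 0)                      # a cut-free proof has the minimum measure
    x = max(cut_complexities)              # maximum cut complexity
    y = cut_complexities.count(x)          # number of cuts of maximum complexity
    return (x, y)                          # Python tuples compare lexicographically

# One conversion step removes a cut of maximum complexity and may introduce
# new cuts of strictly smaller complexity, yet the measure still decreases:
assert normalization_number([3, 3, 1]) > normalization_number([3, 2, 2, 2, 1])
assert normalization_number([3, 1]) > normalization_number([2, 2, 2, 1])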
5.4 Comparison of F R0 and ER
This subsection proves that a proof in ER exists for each normalized proof of F R0, and hence that theorems of F R0 are theorems of ER. Moreover, we give a formula which can be deduced in ER but not in F R, so that ER is properly stronger than F R. First, ER0 is defined as ER in which ∨E1, ∨E2 and ∨E3 are removed and the following rule C4 is added:

   from γ, ¬A : r, ¬A : e ⊢ B : ϕ infer γ, ¬A : r ⊢ B : ϕ by C4.
The following lemma holds.

Lemma 12 A sequent deduced in ER0 can be deduced in ER.

Proof of Lemma 12: Let A be a theorem of ER0. If C4 is not used in the proof of A, this proof is also a proof in ER, and thus A is a theorem of ER. If C4 is used in the proof of A, the proof has the form of the left-hand proof of Figure 19, and we can convert it into the right-hand proof of Figure 19.
By applying this conversion repeatedly, we obtain a proof of the theorem A which does not use C4. This proof is also a proof in ER, and thus A is a theorem of ER.

In the following, we prove that for every sequent which can be deduced in F R0, there exists a proof of the sequent in ER.

Definition 20 For a multiset γ of meta-rank formulae, γ∗ is the multiset of attribute formulae inductively defined as follows.
1. {Aa}∗ = {A : e} (Aa is a rank formula.)
2. {δa}∗ = δ∗ (δa is a meta-rank formula.)
3. {⟨π1|π2⟩1}∗ = {⟨π1|π2⟩2}∗ = π1∗ ∪ π2∗
4. {⟨π1|π2|Aa⟩4}∗ = π1∗ ∪ π2∗ ∪ {¬A : e}
5. (δ1 ∪ δ2)∗ = δ1∗ ∪ δ2∗
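As a reading aid only, the translation (·)∗ of Definition 20 can be sketched as a recursive function over a simplified representation of meta-rank formulae. The data types below (RankF, Ranked, Angle) and the encoding of an attribute formula A : x as a pair are hypothetical names introduced for this sketch; ranks are ignored, since the translation never inspects them.

from dataclasses import dataclass
from typing import List, Optional, Tuple, Union

@dataclass
class RankF:                 # a rank formula A_a; the rank a plays no role in (.)*
    formula: str

@dataclass
class Ranked:                # a meta-rank formula delta_a: a multiset delta given a rank a
    delta: List["Meta"]

@dataclass
class Angle:                 # <pi1|pi2>_1, <pi1|pi2>_2, or <pi1|pi2|A_a>_4 when formula is set
    pi1: List["Meta"]
    pi2: List["Meta"]
    formula: Optional[str] = None

Meta = Union[RankF, Ranked, Angle]

def star(gamma: List[Meta]) -> List[Tuple[str, str]]:
    """Translate a multiset of meta-rank formulae into attribute formulae (Definition 20)."""
    result: List[Tuple[str, str]] = []
    for m in gamma:                                   # clause 5: translate element by element
        if isinstance(m, RankF):
            result.append((m.formula, "e"))           # clause 1: {A_a}* = {A : e}
        elif isinstance(m, Ranked):
            result += star(m.delta)                   # clause 2: {delta_a}* = delta*
        else:
            result += star(m.pi1) + star(m.pi2)       # clause 3: union of the translated parts
            if m.formula is not None:
                result.append(("¬" + m.formula, "e")) # clause 4: add the extra ¬A : e
    return result

# Example: ({¬F{f}}{k})* = {¬F : e} and (⟨{G}|∅|Fc⟩4)* = {G : e, ¬F : e}
print(star([Ranked([RankF("¬F")])]))                  # [('¬F', 'e')]
print(star([Angle([RankF("G")], [], "F")]))           # [('G', 'e'), ('¬F', 'e')]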
Lemma 13 Suppose that a normalized proof of γ ⊢ Aa exists in F R0. Then there exists an attribute value ϕ such that γ∗ ⊢ A : ϕ can be deduced in ER0; moreover, if γ ⊢ Aa is not a component of an I-rule chain, ϕ is not i. Suppose that a normalized proof of γ ⊢a exists in F R0. Then γ∗ ⊢ can be deduced in ER0.

Proof of Lemma 13: We prove this lemma by induction over the structure of normalized proofs in F R0.
1. Axiom. The lemma is obvious.
2. ∨D1, ∨D2, ∨D3, ∧I, ∨I1, ∨I2, P, O, C. By the induction hypothesis, the lemma is obvious.
3. ∨D4. Suppose that π1, ⟨γ|δ2⟩1 ⊢ Aa∪c is inferred in F R0. By the induction hypothesis, π1∗, γ∗, δ2∗ ⊢ A : ϕ is inferred in ER0. Thus, the following proof is available:

   from π1∗, γ∗, δ2∗ ⊢ A : ϕ and ¬A : e ⊢ ¬A : e infer π1∗, γ∗, δ2∗, ¬A : e ⊢ by ¬E.

Therefore, there exists a proof in ER0 of the sequent corresponding to δ2, ⟨γ|π1|Aa∪c⟩4 ⊢a]b.
4. ∨D5. Suppose that π2, ⟨γ|π1|Aa⟩4 ⊢ Ba is inferred in F R0. By the induction hypothesis, π2∗, γ∗, π1∗, ¬A : e ⊢ B : ϕ is inferred in ER0. Thus, the following proof is available:

   from π2∗, γ∗, π1∗, ¬A : e ⊢ B : ϕ infer π2∗, γ∗, π1∗ ⊢ A ∨ B : i by EM1.

Therefore, there exists a proof in ER0 of the sequent corresponding to γ, π1, π2 ⊢ A ∨ Ba.
5. →I, ¬I. We prove the →I case; the ¬I case is shown similarly. Suppose that γ, A{k} ⊢ Bb]{k} is inferred in F R0. By the induction hypothesis, γ∗, A : e ⊢ B : ϕ is inferred in ER0. Thus, we obtain the following proof:

   from γ∗, A : e ⊢ B : ϕ infer γ∗ ⊢ A→B : i by →I.

6. RAA. Suppose that we obtain the following inference in F R0:

   from γ, ¬A{k} ⊢b]{k} infer γ ⊢ Ab by RAA.

By the induction hypothesis, γ∗, ¬A : e ⊢ is inferred in ER0. By the F R0 normalization theorem, A is an atomic proposition. We obtain a proof of γ∗, ¬A : r ⊢ by replacing the axiom ¬A : e ⊢ ¬A : e with the axiom ¬A : r ⊢ ¬A : r in the proof of γ∗, ¬A : e ⊢. Thus, the following proof is available:

   from γ∗, ¬A : r ⊢ infer γ∗ ⊢ A : e by RAA.
7. ∧E1, ∧E2. We prove the ∧E1 case; the ∧E2 case is shown similarly. Suppose that we obtain in F R0 the inference

   from γ ⊢ A ∧ Ba infer γ ⊢ Aa by ∧E1.

By the F R0 normalization theorem, γ ⊢ A ∧ Ba is not contained in an I-rule chain. Thus, by the induction hypothesis, we can prove γ∗ ⊢ A ∧ B : e in ER0 and obtain the ER0 inference

   from γ∗ ⊢ A ∧ B : e infer γ∗ ⊢ A : e by ∧E1.

8. →E, ¬E. We prove the →E case; the ¬E case is shown similarly. Suppose that γ ⊢ A→Ba and δ ⊢ Ab are inferred in F R0. By the F R0 normalization theorem, γ ⊢ A→Ba is not included in an I-rule chain. By the induction hypothesis, γ∗ ⊢ A→B : e and δ∗ ⊢ A : φ are inferred in ER0. Thus, we obtain the following proof:

   from γ∗ ⊢ A→B : e and δ∗ ⊢ A : φ infer γ∗, δ∗ ⊢ B : e by →E.
9. ∨E. Suppose that we obtain the following inference in F R0:

   from γ ⊢ A ∨ Ba, δ1 ⊢ ¬Ab and δ2 ⊢ ¬Bb infer γ, δ1, δ2 ⊢b∪a by ∨E1.

By the F R0 normalization theorem, γ ⊢ A ∨ Ba is not contained in an I-rule chain. Thus, by the induction hypothesis, we obtain the following sequents in ER0:

   γ∗ ⊢ A ∨ B : e,  δ1∗ ⊢ ¬A : ϕ1,  δ2∗ ⊢ ¬B : ϕ2.

If ϕ1 is i, then δ1∗ ⊢ ¬A : ϕ1 is inferred as follows:

   from δ1∗, A : e ⊢ infer δ1∗ ⊢ ¬A : i by ¬I.

If ϕ1 is e or r, then we obtain the following proof:

   from δ1∗ ⊢ ¬A : ϕ1 and A : e ⊢ A : e infer δ1∗, A : e ⊢ by ¬E.

Therefore, whatever the value of ϕ1, δ1∗, A : e ⊢ is inferred. Similarly, δ2∗, B : e ⊢ is also inferred. It follows that we obtain the following proof:

   from γ∗ ⊢ A ∨ B : e, δ1∗, A : e ⊢ and δ2∗, B : e ⊢ infer γ∗, δ1∗, δ2∗ ⊢ by ∨E4.

The next lemma follows from the above discussion.

Lemma 14 Theorems of F R are also theorems of ER.

Proof of Lemma 14: By Lemmas 7, 12 and 13, the lemma is obvious.

A→(B→(A ∧ B)) is a theorem of ER but not of F R [2]. Thus, the following theorem holds.

Theorem 4. ER is properly stronger than F R.
6 Properties of ER
This section discusses some properties of ER. In ER, Modus Ponens, the most important inference rule, does not hold. This is because (A→(B→A ∧ B))→(A→(B→A)) and A→(B→A ∧ B) are theorems of ER, but A→(B→A) is not. However, ER is not weaker than R; A→(B→(A ∧ B)) cannot be proved in R [2], whereas it is proved in ER. Since ER is properly stronger than R, the failure of Modus Ponens is not fatal. Whether other primitive logical rules, such as Substitution of Equivalents, hold in ER is not yet clear. However, the Prefixing Rule holds in ER.

Theorem 5 (Prefixing Rule). If A→B is a theorem of ER, then so is (C→A)→(C→B).

Proof of Theorem 5: Suppose that A→B is a theorem of ER. Then ⊢ A→B : i is proved in ER, and hence A : e ⊢ B : ϕ is proved in ER. Thus, we obtain the following proof:

   from C→A : e ⊢ C→A : e and C : e ⊢ C : e infer C→A : e, C : e ⊢ A : e by →E;
   continuing with the proof of A : e ⊢ B : ϕ, we obtain C→A : e, C : e ⊢ B : ϕ;
   by →I, C→A : e ⊢ C→B : i;
   by →I, ⊢ (C→A)→(C→B) : i.
Thus, (C→A)→(C→B) is a theorem of ER.

Next, we show an advantage of ER by an example. We use the following propositions.
– Rain expresses "It rains".
– Fine expresses "It is fine".
– Athletic expresses "The athletic meeting is postponed".
– AW expresses "The athletic meeting depends on the weather".
Suppose that the following propositions hold.
– Rain→Athletic: if it rains, then the athletic meeting is postponed.
– (Rain→Athletic)→AW: that the athletic meeting is postponed if it rains implies that the athletic meeting depends on the weather.
– Rain ∨ Fine: it rains or it is fine.
Suppose that all three formulae hold. In classical logic, we can deduce Athletic→AW from (Rain→Athletic)→AW. Considering the meaning of AW, the athletic meeting depends on the weather when it is postponed whenever it rains; however, we cannot say that the athletic meeting depends on the weather merely because it is postponed. Athletic→AW is nevertheless deducible because, in classical logic, A→B is inferred from B. On the other hand, in ER and in many relevant logics, since A→B cannot be inferred from B, Athletic→AW cannot be inferred from (Rain→Athletic)→AW. Furthermore, ¬Athletic→Fine is inferred in classical logic; this formula means that if the athletic meeting is not postponed, then it is fine. In most relevant logics, since Disjunctive Syllogism does not hold, ¬Athletic→Fine cannot be inferred; that is, Fine cannot be inferred from Rain ∨ Fine and ¬Rain in these logics. In ER, however, Disjunctive Syllogism holds, so ¬Athletic→Fine can be inferred. Thus, the fallacies of implication of classical logic are removed in ER while sufficient provability is retained. We therefore consider ER more suitable for artificial intelligence than classical logic or the other relevant logics.
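For concreteness, the inference of ¬Athletic→Fine can be read as the following chain; this is only a sketch of the intended reasoning, and the exact ER rule applications are not spelled out here.
1. ¬Athletic (assumption to be discharged)
2. ¬Rain (from 1 and Rain→Athletic)
3. Fine (from 2 and Rain ∨ Fine, by Disjunctive Syllogism)
4. ¬Athletic→Fine (discharging the assumption 1)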
7 Conclusion
This paper presented the new relevant logical system ER. We proved that fallacies of relation and validity are removed in this system and that ER is properly stronger than R. In most systems of relevant logic, some classical logic theorems that contain no fallacies are not theorems, because those systems are weakened in order to remove the theorems that do contain fallacies. ER, in contrast, can infer some of these fallacy-free classical theorems while still excluding fallacies of relevance and validity. Thus, ER is more suitable for the formalization of knowledge reasoning. Future work is to settle the decidability of ER and to give a semantics for ER.
References
1. W. Ackermann. Begründung einer strengen Implikation. Journal of Symbolic Logic, 21:113–128, 1956.
2. A. R. Anderson and N. D. Belnap. Entailment: The Logic of Relevance and Necessity, volume 1. Princeton University Press, 1975.
3. Ross T. Brady. Relevant implication and the case for a weaker logic. Journal of Philosophical Logic, 25:151–183, 1996.
4. Jingde Cheng. The fundamental role of entailment in knowledge representation and reasoning. Journal of Computing and Information, 2(1):853–873, 1996.
5. Alonzo Church. The weak theory of implication. 1951.
6. S. Halldén. A note concerning the paradoxes of strict implication and Lewis's system S1. Journal of Symbolic Logic, 13:138–139, 1948.
7. G. E. Hughes and M. J. Cresswell. An Introduction to Modal Logic. Methuen, London, 1968.
8. C. I. Lewis and C. H. Langford. Symbolic Logic. The Century Co., 1932.
9. Shaw Kwei Moh. The deduction theorems and two new logical systems. Methodos, 2:56–75, 1950.
10. Takeo Sugihara. Non Classical Logic (in Japanese). Maki Shoten, 1975.
11. A. S. Troelstra and D. van Dalen. Constructivism in Mathematics: An Introduction, Vol. 2. North-Holland, 1988.
Fig. 16. The conversion rule for a cut whose beginning is a conclusion of ¬I.

Fig. 17. The conversion rule for a cut whose beginning is a conclusion of ∨I1 or ∨I2.

Fig. 18. The conversion rule for a cut whose beginning is a conclusion of ∨D5: the premises λ1 ⊢ ¬Ad and λ2 ⊢ ¬Bd of the final ∨E are lifted to rank {p} by P and consumed earlier by ¬E, the ∨D4 and ∨D5 steps become ∨D2 and ∨D3, and the proof ends with applications of C and O, concluding φ, λ1, λ2 ⊢a∪c∪d.

Fig. 19. The conversion used in the proof of Lemma 12: on the left, C4 is applied to γ, ¬B : r, ¬B : e ⊢ C : φ; on the right, the duplicated formula ¬B : e is carried down to δ, ¬B : r, ¬B : e ⊢ and removed by C3 just before RAA, and the remainder of the proof (concluding ⊢ A : ϕ) is unchanged.