Evaluation Trees for Proposition Algebra Jan A. Bergstra and Alban Ponse
arXiv:1504.08321v1 [cs.LO] 30 Apr 2015
Section Theory of Computer Science Informatics Institute, Faculty of Science University of Amsterdam, Netherlands https://staff.fnwi.uva.nl/{j.a.bergstra/,a.ponse/}
Abstract. Proposition algebra is based on Hoare’s conditional connective, which is a ternary connective comparable to if-then-else and used in the setting of propositional logic. Conditional statements are provided with a simple semantics that is based on evaluation trees and that characterizes so-called free valuation congruence: two conditional statements are free valuation congruent if, and only if, they have equal evaluation trees. Free valuation congruence is axiomatized by the four basic equational axioms of proposition algebra that define the conditional connective. Valuation congruences that identify more conditional statements than free valuation congruence are repetition-proof, contractive, memorizing, and static valuation congruence. Each of these valuation congruences is characterized using a transformation on evaluation trees: two conditional statements are C-valuation congruent if, and only if, their C-transformed evaluation trees are equal. These transformations are simple and natural, and only for static valuation congruence a slightly more complex transformation is used. Also, each of these valuation congruences is axiomatized in proposition algebra. A spin-off of our approach is “basic form semantics for proposition algebra”: for each valuation congruence C considered, two conditional statements are C-valuation congruent if, and only if, they have equal C-basic forms, where C-basic forms are obtained by a syntactic transformation of conditional statements, which is a form of normalization.

Key words: Conditional composition, evaluation tree, proposition algebra
1 Introduction
In 1985, Hoare’s paper A couple of novelties in the propositional calculus was published [11].[1] In this paper the ternary connective ⊳ ⊲ is introduced as the conditional.[2] A more common expression for a conditional statement P ⊳ Q ⊲ R is if Q then P else R, but, in order to reason systematically with conditional statements, a notation such as P ⊳ Q ⊲ R is preferable. In a conditional statement P ⊳ Q ⊲ R, first Q is evaluated, and depending on that evaluation result, then either P or R is evaluated (and the other is not) and determines the
[1] This paper is also available in the 1989 book Essays in Computing Science [12, Chapter Nineteen].
[2] To be distinguished from Hoare’s conditional introduced in his 1985 book on CSP [10] and in his well-known 1987 paper Laws of Programming [9] for expressions P ⊳ b ⊲ Q with P and Q denoting programs and b a Boolean expression.
Table 1. The set CP of equational axioms for free valuation congruence

  x ⊳ T ⊲ y = x                                          (CP1)
  x ⊳ F ⊲ y = y                                          (CP2)
  T ⊳ x ⊲ F = x                                          (CP3)
  x ⊳ (y ⊳ z ⊲ u) ⊲ v = (x ⊳ y ⊲ v) ⊳ z ⊲ (x ⊳ u ⊲ v)   (CP4)
evaluation value. This evaluation strategy is a form of short-circuit evaluation.[3] In [11], Hoare proves that propositional logic is characterized by eleven equational axioms, some of which employ constants T and F for the truth values true and false. In 2011, we introduced Proposition Algebra in [4] as a general approach to the study of the conditional: we defined several valuation congruences and provided equational axiomatizations of these congruences. The most basic and least identifying valuation congruence is free valuation congruence, which is axiomatized by the axioms in Table 1. These axioms stem from [11] and define the conditional as a primitive connective. We use the name CP (for Conditional Propositions) for this set of axioms. Interpreting a conditional statement as an if-then-else expression, the axioms (CP1)–(CP3) are natural, and axiom (CP4) (distributivity) can be clarified by case analysis: if z evaluates to true and y as well, then x determines the result of evaluation; if z evaluates to true and y evaluates to false, then v determines the result of evaluation, and so on.

Free valuation congruence identifies fewer conditional statements than the equivalence defined by Hoare’s axioms in [11]. For example, the atomic proposition a and the conditional statement T ⊳ a ⊲ a are not equivalent with respect to free valuation congruence, although they are equivalent with respect to static valuation congruence, which is the valuation congruence that characterizes propositional logic. A valuation congruence that identifies more than free and less than static valuation congruence is repetition-proof valuation congruence, which has an axiomatization that comprises two more (schematic) axioms, one of which reads

  x ⊳ a ⊲ (y ⊳ a ⊲ z) = x ⊳ a ⊲ (z ⊳ a ⊲ z).
For example, T ⊳ a ⊲ a = T ⊳ a ⊲ (T ⊳ a ⊲ F) = T ⊳ a ⊲ (F ⊳ a ⊲ F), so the left-hand and right-hand conditional statements are equivalent with respect to repetition-proof valuation congruence, but they are not equivalent with respect to free valuation congruence.

In Section 2 we characterize free valuation congruence with help of evaluation trees: given a conditional statement, its evaluation tree directly represents all its evaluations (in the way a truth table does in the case of propositional logic). Two conditional statements are equivalent with respect to free valuation congruence if, and only if, their evaluation trees are equal. Evaluation trees are simple binary trees, proposed by Daan Staudt in [14] (which appeared in 2012). In Section 3 we characterize repetition-proof valuation congruence by defining a transformation on evaluation trees that yields repetition-proof evaluation trees: two conditional statements are equivalent with respect to repetition-proof valuation congruence if, and only if, they have equal repetition-proof evaluation trees. Although this transformation on evaluation trees is simple and natural,
[3] Short-circuit evaluation denotes the semantics of binary propositional connectives in which the second argument is evaluated only if the first argument does not suffice to determine the value of the expression.
our proof of the mentioned characterization (which is phrased as a completeness result) is non-trivial, and we could not find a proof that is essentially simpler. Valuation congruences that identify more conditional statements than repetition-proof valuation congruence are contractive, memorizing, and static valuation congruence, and these are all defined and axiomatized in [4]. In Sections 4–6, each of these valuation congruences is characterized using a transformation on evaluation trees: two conditional statements are C-valuation congruent if, and only if, their C-transformed evaluation trees are equal. These transformations are simple and natural, and only for static valuation congruence do we use a slightly more complex transformation.

A spin-off of our approach can be called “basic form semantics for proposition algebra”: for each valuation congruence C that we consider (including the case C = free), two conditional statements are C-valuation congruent if, and only if, they have equal C-basic forms, where C-basic forms are obtained by a syntactic transformation of conditional statements, which is a form of normalization.
2 Evaluation trees for free valuation congruence
Consider the signature ΣCP(A) = {T, F, ⊳ ⊲ , a | a ∈ A} with constants T and F for the truth values true and false, respectively, and A a countable set of atomic propositions, which will be further called atoms. We write CA for the set of closed terms, or conditional statements, over the signature ΣCP(A). Given a conditional statement P ⊳ Q ⊲ R, we sometimes refer to Q as its central condition. We define the dual P^d of P ∈ CA as follows:

  T^d = F,
  F^d = T,
  a^d = a    (for a ∈ A),
  (P ⊳ Q ⊲ R)^d = R^d ⊳ Q^d ⊲ P^d.
Observe that CP is a self-dual axiomatization: when defining x^d = x for each variable x, the dual of each axiom is also in CP, and hence CP ⊢ P = Q ⇐⇒ CP ⊢ P^d = Q^d. A natural view on conditional statements in CA involves short-circuit evaluation, similar to how we consider the evaluation of an “if y then x else z” expression. The following definition is taken from [14].

Definition 2.1. The set TA of evaluation trees over A with leaves in {T, F} is defined inductively by

  T ∈ TA,
  F ∈ TA,
  (X ⊳ a ⊲ Y) ∈ TA for any X, Y ∈ TA and a ∈ A.

The operator ⊳ a ⊲ is called post-conditional composition over a. In the evaluation tree X ⊳ a ⊲ Y, the root is represented by a, the left branch by X and the right branch by Y. The depth d(X) of an evaluation tree X is defined by d(T) = d(F) = 0 and d(Y ⊳ a ⊲ Z) = 1 + max{d(Y), d(Z)}.
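To make the tree definitions concrete, here is a small sketch in Python (an illustrative encoding of ours, not from the paper): the leaves T and F are the strings "T" and "F", and a composite tree X ⊳ a ⊲ Y is the tuple (X, a, Y), with atoms represented as strings.

```python
# Hypothetical encoding (ours): leaves are "T"/"F", the tree X <| a |> Y
# is the tuple (X, a, Y), and an atom a is a string such as "a".

def depth(x):
    """Depth of an evaluation tree: d(T) = d(F) = 0 and
    d(Y <| a |> Z) = 1 + max(d(Y), d(Z))."""
    if x in ("T", "F"):
        return 0
    y, _a, z = x
    return 1 + max(depth(y), depth(z))

# The tree F <| b |> (T <| a |> F) used as running example has depth 2.
example = ("F", "b", ("T", "a", "F"))
print(depth(example))  # 2
```

The tuple encoding is convenient because syntactic equality of trees is then plain tuple equality in Python.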
We refer to trees in TA as evaluation trees, or trees for short. Post-conditional composition and its notation stem from [2]. Evaluation trees play a crucial role in the main results of [14]. Next to the formal notation for evaluation trees we also use a more pictorial representation. For example, the tree F ⊳ b ⊲ (T ⊳ a ⊲ F) can be represented as follows ( ⊳ yields a left branch, and ⊲ a right branch):

      b
     / \
    F   a
       / \
      T   F
In order to define our “evaluation tree semantics”, we first define the leaf replacement operator, ‘replacement’ for short, on trees in TA as follows. Let X, X′, X′′, Y, Z ∈ TA and a ∈ A. The replacement of T with Y and F with Z in X, denoted X[T ↦ Y, F ↦ Z], is defined by

  T[T ↦ Y, F ↦ Z] = Y,
  F[T ↦ Y, F ↦ Z] = Z,
  (X′ ⊳ a ⊲ X′′)[T ↦ Y, F ↦ Z] = X′[T ↦ Y, F ↦ Z] ⊳ a ⊲ X′′[T ↦ Y, F ↦ Z].
We note that the order in which the replacements of leaves of X is listed is irrelevant, and we adopt the convention of not listing identities inside the brackets, e.g., X[F ↦ Z] = X[T ↦ T, F ↦ Z]. Furthermore, repeated replacements satisfy the following equation:

  X[T ↦ Y1, F ↦ Z1][T ↦ Y2, F ↦ Z2]
    = X[T ↦ Y1[T ↦ Y2, F ↦ Z2], F ↦ Z1[T ↦ Y2, F ↦ Z2]].
We now have the terminology and notation to define the interpretation of conditional statements in CA as evaluation trees by a function se (abbreviating short-circuit evaluation).

Definition 2.2. The short-circuit evaluation function se : CA → TA is defined as follows, where a ∈ A:

  se(T) = T,
  se(F) = F,
  se(a) = T ⊳ a ⊲ F,
  se(P ⊳ Q ⊲ R) = se(Q)[T ↦ se(P), F ↦ se(R)].

As we can see from the definition on atoms, evaluation continues in the left branch if an atom evaluates to true and in the right branch if it evaluates to false. We shall often use the constants T and F to denote the result of an evaluation (instead of true and false). For an example see Fig. 1, where the rightmost tree can be derived as follows:

  se(a ⊳ (F ⊳ b ⊲ T) ⊲ F) = se(F ⊳ b ⊲ T)[T ↦ se(a), F ↦ se(F)]
                          = (F ⊳ b ⊲ T)[T ↦ se(a)]
                          = F ⊳ b ⊲ (T ⊳ a ⊲ F).
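Definition 2.2 and the leaf replacement operator can be sketched directly in Python (same illustrative tuple encoding as before, used for terms and trees alike):

```python
def repl(x, y, z):
    """Leaf replacement X[T -> Y, F -> Z]: replace every T-leaf of X by Y
    and every F-leaf of X by Z."""
    if x == "T":
        return y
    if x == "F":
        return z
    x1, a, x2 = x
    return (repl(x1, y, z), a, repl(x2, y, z))

def se(p):
    """Short-circuit evaluation se : C_A -> T_A (Definition 2.2)."""
    if p in ("T", "F"):
        return p
    if isinstance(p, str):          # an atom a: se(a) = T <| a |> F
        return ("T", p, "F")
    p1, q, r = p                    # p encodes P <| Q |> R
    return repl(se(q), se(p1), se(r))

# The derivation from the text: se(a <| (F <| b |> T) |> F) = F <| b |> (T <| a |> F).
print(se(("a", ("F", "b", "T"), "F")))  # ('F', 'b', ('T', 'a', 'F'))
```
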
        b                           b
      /   \                       /   \
     a     c                     F     a
    / \   / \                         / \
   T   F T   F                       T   F

  The evaluation tree             The evaluation tree
  se(a ⊳ b ⊲ c)                   se(a ⊳ (F ⊳ b ⊲ T) ⊲ F)

Fig. 1. Two examples of evaluation trees
Definition 2.3. Let P ∈ CA. An evaluation of P is a pair (σ, B) where σ ∈ (A{T, F})∗ and B ∈ {T, F}, such that if se(P) ∈ {T, F}, then σ = ε (the empty string) and B = se(P), and otherwise,

  σ = a1B1 a2B2 ··· anBn,

where a1a2 ··· an B is a complete path in se(P) and
– for i < n, if ai+1 is a left child of ai then Bi = T, and otherwise Bi = F,
– if B is a left child of an then Bn = T, and otherwise Bn = F.

We refer to σ as the evaluation path and to B as the evaluation result. So, an evaluation of a conditional statement P can be characterized by a complete path in se(P) (from root to leaf), including the evaluations of its successive atoms. As an example, consider F ⊳ a ⊲ (F ⊳ a ⊲ T) and its se-image

      a
     / \
    F   a
       / \
      F   T
In this evaluation tree, the evaluation (aFaT, F) expresses that the first occurrence of a is evaluated to F, the second occurrence of a is evaluated to T, and the final evaluation value is F. In this way, each evaluation tree in turn gives rise to a unique conditional statement.

Definition 2.4. Basic forms over A are defined by the following grammar

  t ::= T | F | t ⊳ a ⊲ t    for a ∈ A.

We write BFA for the set of basic forms over A. The depth d(P) of P ∈ BFA is defined by d(T) = d(F) = 0 and d(Q ⊳ a ⊲ R) = 1 + max{d(Q), d(R)}. The basic form associated with the last example is F ⊳ a ⊲ (F ⊳ a ⊲ T), and its se-image is F ⊳ a ⊲ (F ⊳ a ⊲ T).

Lemma 2.5. For all basic forms P and Q, se(P) = se(Q) implies P = Q.

Proof. By structural induction on P. The base cases P ∈ {T, F} are trivial. If P = P1 ⊳ a ⊲ P2, then Q ∉ {T, F} and Q ≠ Q1 ⊳ b ⊲ Q2 with b ≠ a, so Q = Q1 ⊳ a ⊲ Q2 and se(Pi) = se(Qi). By induction we find Pi = Qi, and hence P = Q. □
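The evaluations of Definition 2.3 can be enumerated mechanically by walking every root-to-leaf path of an evaluation tree. A sketch (the function name and encoding are ours): trees are "T"/"F" or tuples (X, a, Y); descending into the left branch records aT, descending into the right branch records aF.

```python
def evaluations(x, sigma=""):
    """Yield all pairs (evaluation path, evaluation result) of tree x."""
    if x in ("T", "F"):
        yield (sigma, x)
        return
    y, a, z = x
    yield from evaluations(y, sigma + a + "T")  # atom a evaluated to true
    yield from evaluations(z, sigma + a + "F")  # atom a evaluated to false

# The se-image of F <| a |> (F <| a |> T) from the example above:
tree = ("F", "a", ("F", "a", "T"))
print(list(evaluations(tree)))
# [('aT', 'F'), ('aFaT', 'F'), ('aFaF', 'T')]
```

In particular, the evaluation (aFaT, F) discussed in the text appears among the three evaluations of this tree.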
Lemma 2.6. For each P ∈ CA there exists Q ∈ BFA such that CP ⊢ P = Q.

Proof. First we establish an auxiliary result: if P, Q, R are basic forms, then there is a basic form S such that CP ⊢ P ⊳ Q ⊲ R = S. This follows by structural induction on Q. The lemma’s statement follows by structural induction on P. The base cases P ∈ {T, F, a | a ∈ A} are trivial, and if P = P1 ⊳ P2 ⊲ P3 there exist by induction basic forms Qi such that CP ⊢ Pi = Qi, hence CP ⊢ P1 ⊳ P2 ⊲ P3 = Q1 ⊳ Q2 ⊲ Q3. Now apply the auxiliary result. □

Definition 2.7. Free valuation congruence, notation =se, is defined on CA as follows:

  P =se Q ⇐⇒ se(P) = se(Q).

Lemma 2.8. Free valuation congruence is a congruence relation.

Proof. Let P, Q, R ∈ CA and assume P =se P′, thus se(P) = se(P′). Then

  se(P ⊳ Q ⊲ R) = se(Q)[T ↦ se(P), F ↦ se(R)]
               = se(Q)[T ↦ se(P′), F ↦ se(R)]
               = se(P′ ⊳ Q ⊲ R),

and thus P ⊳ Q ⊲ R =se P′ ⊳ Q ⊲ R. The two remaining cases can be proved in a similar way. □

Theorem 2.9 (Completeness of CP). For all P, Q ∈ CA,

  CP ⊢ P = Q ⇐⇒ P =se Q.

Proof. We first prove ⇒. By Lemma 2.8, =se is a congruence relation and it easily follows that all CP-axioms are sound. For example, soundness of axiom (CP4) follows from

  se(P ⊳ (Q ⊳ R ⊲ S) ⊲ U)
    = se(Q ⊳ R ⊲ S)[T ↦ se(P), F ↦ se(U)]
    = se(R)[T ↦ se(Q), F ↦ se(S)][T ↦ se(P), F ↦ se(U)]
    = se(R)[T ↦ se(Q)[T ↦ se(P), F ↦ se(U)], F ↦ se(S)[T ↦ se(P), F ↦ se(U)]]
    = se(R)[T ↦ se(P ⊳ Q ⊲ U), F ↦ se(P ⊳ S ⊲ U)]
    = se((P ⊳ Q ⊲ U) ⊳ R ⊲ (P ⊳ S ⊲ U)).

In order to prove ⇐, let P =se Q. According to Lemma 2.6 there exist basic forms P′ and Q′ such that CP ⊢ P = P′ and CP ⊢ Q = Q′, so CP ⊢ P′ = Q′. By soundness (⇒) we find P′ =se Q′, so by Lemma 2.5, P′ = Q′. Hence, CP ⊢ P = P′ = Q′ = Q. □
A consequence of the above results is that for each P ∈ CA there is a unique basic form P′ with CP ⊢ P = P′, and that for each basic form, its se-image has exactly the same syntactic structure (replacing ⊳ by ⊳ , and ⊲ by ⊲ ). In the remainder of this section, we make this precise.

Definition 2.10. The basic form function bf : CA → BFA is defined as follows, where a ∈ A:

  bf(T) = T,
  bf(F) = F,
  bf(a) = T ⊳ a ⊲ F,
  bf(P ⊳ Q ⊲ R) = bf(Q)[T ↦ bf(P), F ↦ bf(R)].
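Definition 2.10, together with the replacement operator on basic forms introduced next, can be sketched in Python (our illustrative encoding as before: "T"/"F", atom strings, and tuples (P, Q, R) for P ⊳ Q ⊲ R):

```python
def repl(p, q, r):
    """P[T -> Q, F -> R] on basic forms: replace all T-occurrences of P by Q
    and all F-occurrences of P by R."""
    if p == "T":
        return q
    if p == "F":
        return r
    p1, a, p2 = p
    return (repl(p1, q, r), a, repl(p2, q, r))

def bf(p):
    """The basic form function bf : C_A -> BF_A (Definition 2.10)."""
    if p in ("T", "F"):
        return p
    if isinstance(p, str):          # an atom a: bf(a) = T <| a |> F
        return ("T", p, "F")
    p1, q, r = p
    return repl(bf(q), bf(p1), bf(r))

# On this encoding bf mirrors se clause by clause, illustrating that the
# se-image of a basic form has the same syntactic structure as the form itself.
print(bf(("a", ("F", "b", "T"), "F")))  # ('F', 'b', ('T', 'a', 'F'))
```

Applying `bf` twice yields the same result as applying it once, a spot-check of Lemma 2.12 below.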
Given Q, R ∈ BFA, the auxiliary function [T ↦ Q, F ↦ R] : BFA → BFA, for which post-fix notation P[T ↦ Q, F ↦ R] is adopted, is defined as follows:

  T[T ↦ Q, F ↦ R] = Q,
  F[T ↦ Q, F ↦ R] = R,
  (P1 ⊳ a ⊲ P2)[T ↦ Q, F ↦ R] = P1[T ↦ Q, F ↦ R] ⊳ a ⊲ P2[T ↦ Q, F ↦ R].

(The notational overloading with the leaf replacement functions on valuation trees is harmless.) So, for given Q, R ∈ BFA, the auxiliary function [T ↦ Q, F ↦ R] applied to P ∈ BFA (thus, P[T ↦ Q, F ↦ R]) replaces all T-occurrences in P by Q, and all F-occurrences in P by R.

Lemma 2.11. For all P ∈ CA, bf(P) is a basic form.

Proof. By structural induction. The base cases are trivial. For the inductive case, we find bf(P ⊳ Q ⊲ R) = bf(Q)[T ↦ bf(P), F ↦ bf(R)], so by induction, bf(P), bf(Q), and bf(R) are basic forms. Furthermore, replacing all T-occurrences and F-occurrences in bf(Q) by basic forms bf(P) and bf(R), respectively, yields a basic form. □

Lemma 2.12. If P is a basic form, then bf(P) = P.

Proof. By structural induction on P. □

Definition 2.13. The binary relation =bf on CA is defined as follows:

  P =bf Q ⇐⇒ bf(P) = bf(Q).

The following lemma is a rephrasing of Lemma 2.8 for the function bf:

Lemma 2.14. The relation =bf is a congruence relation.
Proof. Let P, Q, R ∈ CA and assume P =bf P′, thus bf(P) = bf(P′). Then

  bf(P ⊳ Q ⊲ R) = bf(Q)[T ↦ bf(P), F ↦ bf(R)]
               = bf(Q)[T ↦ bf(P′), F ↦ bf(R)]
               = bf(P′ ⊳ Q ⊲ R),

and thus P ⊳ Q ⊲ R =bf P′ ⊳ Q ⊲ R. The two remaining cases can be proved in a similar way. □

Before proving that CP is an axiomatization of the relation =bf, we show that each instance of the axiom (CP4) satisfies =bf.

Lemma 2.15. For all P, P1, P2, Q1, Q2 ∈ CA,

  bf(Q1 ⊳ (P1 ⊳ P ⊲ P2) ⊲ Q2) = bf((Q1 ⊳ P1 ⊲ Q2) ⊳ P ⊲ (Q1 ⊳ P2 ⊲ Q2)).

Proof. By definition, the lemma’s statement is equivalent with

  bf(P)[T ↦ bf(P1), F ↦ bf(P2)][T ↦ bf(Q1), F ↦ bf(Q2)]
    = bf(P)[T ↦ bf(Q1 ⊳ P1 ⊲ Q2), F ↦ bf(Q1 ⊳ P2 ⊲ Q2)].    (1)
By Lemma 2.11, bf(P), bf(Pi), and bf(Qi) are basic forms. We prove (1) by structural induction on the form that bf(P) can have. If bf(P) = T, then

  T[T ↦ bf(P1), F ↦ bf(P2)][T ↦ bf(Q1), F ↦ bf(Q2)] = bf(P1)[T ↦ bf(Q1), F ↦ bf(Q2)]

and

  T[T ↦ bf(Q1 ⊳ P1 ⊲ Q2), F ↦ bf(Q1 ⊳ P2 ⊲ Q2)] = bf(Q1 ⊳ P1 ⊲ Q2)
                                               = bf(P1)[T ↦ bf(Q1), F ↦ bf(Q2)].

If bf(P) = F, then (1) follows in a similar way. The inductive case bf(P) = R1 ⊳ a ⊲ R2 is trivial (by the last defining clause of the auxiliary functions [T ↦ Q, F ↦ R], see Definition 2.10). □

Theorem 2.16. For all P, Q ∈ CA, CP ⊢ P = Q ⇐⇒ P =bf Q.

Proof. We first prove ⇒. By Lemma 2.14, =bf is a congruence relation and it easily follows that arbitrary instances of the CP-axioms (CP1)–(CP3) satisfy =bf. By Lemma 2.15 it follows that arbitrary instances of axiom (CP4) also satisfy =bf.

In order to prove ⇐, assume P =bf Q. According to Lemma 2.6, there exist basic forms P′ and Q′ such that CP ⊢ P = P′ and CP ⊢ Q = Q′, so CP ⊢ P′ = Q′. By ⇒ it follows that P′ =bf Q′, which implies by Lemma 2.12 that P′ = Q′. Hence, CP ⊢ P = P′ = Q′ = Q. □

Corollary 2.17. For all P ∈ CA, CP ⊢ P = bf(P).

Proof. By Lemma 2.11 and Lemma 2.12, bf(P) = bf(bf(P)), thus P =bf bf(P). By Theorem 2.16, CP ⊢ P = bf(P). □

Corollary 2.18. Free valuation congruence =se coincides with the relation =bf.
3 Evaluation trees for repetition-proof valuation congruence
In [4] we introduced various axiomatic extensions of the axiom set CP, that is, extensions defined by adding axioms to CP, that identify more conditional statements than CP does. Repetition-proof CP is the extension of CP with these axiom schemes, where a ranges over A:

  (x ⊳ a ⊲ y) ⊳ a ⊲ z = (x ⊳ a ⊲ x) ⊳ a ⊲ z,    (CPrp1)
  x ⊳ a ⊲ (y ⊳ a ⊲ z) = x ⊳ a ⊲ (z ⊳ a ⊲ z).    (CPrp2)

We write CPrp(A) for this extension. These axiom schemes characterize that for each atom a, a consecutive evaluation of a yields the same result, so in both cases the conditional statement at the y-position will not be evaluated and can be replaced by any other. Note that (CPrp1) and (CPrp2) are each other’s dual.

We define a proper subset of basic forms with the property that each propositional statement can be proved equal to such a basic form.

Definition 3.1. Rp-basic forms are inductively defined:
– T and F are rp-basic forms, and
– P1 ⊳ a ⊲ P2 is an rp-basic form if P1 and P2 are rp-basic forms, and if Pi is not equal to T or F, then either the central condition in Pi is different from a, or Pi is of the form Qi ⊳ a ⊲ Qi.

It will turn out useful to define a function that transforms conditional statements into rp-basic forms, and that is comparable to the function bf (see Definition 2.10).
Definition 3.2. The rp-basic form function rpbf : CA → CA is defined by

  rpbf(P) = rp(bf(P)).

The auxiliary function rp : BFA → BFA is defined as follows:

  rp(T) = T,
  rp(F) = F,
  rp(P ⊳ a ⊲ Q) = rp(fa(P)) ⊳ a ⊲ rp(ga(Q)).

For a ∈ A, the auxiliary functions fa : BFA → BFA and ga : BFA → BFA are defined by

  fa(T) = T,
  fa(F) = F,
  fa(P ⊳ b ⊲ Q) = fa(P) ⊳ a ⊲ fa(P)    if b = a,
  fa(P ⊳ b ⊲ Q) = P ⊳ b ⊲ Q            otherwise,

  ga(T) = T,
  ga(F) = F,
  ga(P ⊳ b ⊲ Q) = ga(Q) ⊳ a ⊲ ga(Q)    if b = a,
  ga(P ⊳ b ⊲ Q) = P ⊳ b ⊲ Q            otherwise.
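Definition 3.2 can be sketched in Python (same illustrative tuple encoding as before; bf is repeated so that the sketch is self-contained). As a check, the transformation identifies the two conditional statements from the introduction, T ⊳ a ⊲ a and T ⊳ a ⊲ (F ⊳ a ⊲ F):

```python
def repl(p, q, r):
    # P[T -> Q, F -> R] on basic forms
    if p == "T": return q
    if p == "F": return r
    p1, a, p2 = p
    return (repl(p1, q, r), a, repl(p2, q, r))

def bf(p):
    # basic form function (Definition 2.10)
    if p in ("T", "F"): return p
    if isinstance(p, str): return ("T", p, "F")
    p1, q, r = p
    return repl(bf(q), bf(p1), bf(r))

def f(a, p):
    # f_a: after a true outcome of a, a repeated a is again true
    if p in ("T", "F"): return p
    p1, b, p2 = p
    return (f(a, p1), a, f(a, p1)) if b == a else p

def g(a, p):
    # g_a: after a false outcome of a, a repeated a is again false
    if p in ("T", "F"): return p
    p1, b, p2 = p
    return (g(a, p2), a, g(a, p2)) if b == a else p

def rp(p):
    if p in ("T", "F"): return p
    p1, a, p2 = p
    return (rp(f(a, p1)), a, rp(g(a, p2)))

def rpbf(p):
    return rp(bf(p))

# T <| a |> a and T <| a |> (F <| a |> F) get the same rp-basic form:
print(rpbf(("T", "a", "a")))              # ('T', 'a', ('F', 'a', 'F'))
print(rpbf(("T", "a", ("F", "a", "F"))))  # ('T', 'a', ('F', 'a', 'F'))
```
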
Thus, rpbf maps a conditional statement P to bf(P) and then transforms bf(P) according to the auxiliary functions rp, fa, and ga.

Lemma 3.3. For all a ∈ A and P ∈ BFA, ga(fa(P)) = fa(fa(P)) = fa(P) and fa(ga(P)) = ga(ga(P)) = ga(P).

Proof. By structural induction on P. The base cases P ∈ {T, F} are trivial. For the inductive case P = Q ⊳ b ⊲ R we have to distinguish the cases b = a and b ≠ a. If b = a, then

  ga(fa(Q ⊳ a ⊲ R)) = ga(fa(Q)) ⊳ a ⊲ ga(fa(Q))
                   = fa(Q) ⊳ a ⊲ fa(Q)          (by IH)
                   = fa(Q ⊳ a ⊲ R),

and fa(fa(Q ⊳ a ⊲ R)) = fa(Q ⊳ a ⊲ R) follows in a similar way. If b ≠ a, then fa(P) = ga(P) = P, and hence ga(fa(P)) = fa(fa(P)) = fa(P). The second pair of equalities can be proved in a similar way. □
In order to prove that for all P ∈ CA, rpbf(P) is an rp-basic form, we use the following auxiliary lemma.

Lemma 3.4. For all a ∈ A and P ∈ BFA, d(P) ≥ d(fa(P)) and d(P) ≥ d(ga(P)).
Proof. Fix some a ∈ A. We prove these inequalities by structural induction on P. The base cases P ∈ {T, F} are trivial. For the inductive case P = Q ⊳ b ⊲ R we have to distinguish the cases b = a and b ≠ a. If b = a, then

  d(Q ⊳ a ⊲ R) = 1 + max{d(Q), d(R)}
              ≥ 1 + d(Q)
              ≥ 1 + d(fa(Q))          (by IH)
              = d(fa(Q) ⊳ a ⊲ fa(Q))
              = d(fa(Q ⊳ a ⊲ R)),

and d(Q ⊳ a ⊲ R) ≥ d(ga(Q ⊳ a ⊲ R)) follows in a similar way. If b ≠ a, then fa(P) = ga(P) = P, and hence d(P) ≥ d(fa(P)) and d(P) ≥ d(ga(P)). □
Lemma 3.5. For all P ∈ CA, rpbf(P) is an rp-basic form.

Proof. We first prove an auxiliary result:

  For all P ∈ BFA, rp(P) is an rp-basic form.    (2)

This follows by induction on the depth d(P) of P. If d(P) = 0, then P ∈ {T, F}, and hence rp(P) = P is an rp-basic form. For the inductive case d(P) = n + 1 it must be the case that P = Q ⊳ a ⊲ R. We find

  rp(Q ⊳ a ⊲ R) = rp(fa(Q)) ⊳ a ⊲ rp(ga(R)),

which is an rp-basic form because
– by Lemma 3.4, fa(Q) and ga(R) are basic forms with depth smaller than or equal to n, so by the induction hypothesis, rp(fa(Q)) and rp(ga(R)) are rp-basic forms,
– rp(fa(Q)) and rp(ga(R)) both satisfy the following property: if the central condition (if present) is a, then the outer arguments are equal. We show this first for rp(fa(Q)) by a case distinction on the form of Q:
  1. If Q ∈ {T, F}, then rp(fa(Q)) = Q, so there is nothing to prove.
  2. If Q = Q1 ⊳ a ⊲ Q2, then fa(Q) = fa(Q1) ⊳ a ⊲ fa(Q1) and thus by Lemma 3.3, rp(fa(Q)) = rp(fa(Q1)) ⊳ a ⊲ rp(fa(Q1)).
  3. If Q = Q1 ⊳ b ⊲ Q2 with b ≠ a, then fa(Q) = Q1 ⊳ b ⊲ Q2, so rp(fa(Q)) = rp(fb(Q1)) ⊳ b ⊲ rp(gb(Q2)) and there is nothing to prove.
The fact that rp(ga(R)) satisfies this property follows in a similar way. This finishes the proof of (2).

The lemma’s statement now follows by structural induction: the base cases (comprising a single atom a) are again trivial, and for the inductive case, rpbf(P ⊳ Q ⊲ R) = rp(bf(P ⊳ Q ⊲ R)) = rp(S) for some basic form S by Lemma 2.11, and by (2), rp(S) is an rp-basic form. □

The following, rather technical lemma is used repeatedly.
Lemma 3.6. If Q ⊳ a ⊲ R is an rp-basic form, then Q = rp(Q) = rp(fa(Q)) and R = rp(R) = rp(ga(R)).

Proof. We first prove an auxiliary result:

  If Q ⊳ a ⊲ R is an rp-basic form, then fa(Q) = ga(Q) and fa(R) = ga(R).    (3)

We prove both equalities by simultaneous induction on the structure of Q and R. The base case, thus Q, R ∈ {T, F}, is trivial. If Q = Q1 ⊳ a ⊲ Q1 and R = R1 ⊳ a ⊲ R1, then Q and R are rp-basic forms with central condition a, so

  fa(Q) = fa(Q1) ⊳ a ⊲ fa(Q1)
       = ga(Q1) ⊳ a ⊲ ga(Q1)    (by IH)
       = ga(Q),

and the equality for R follows in a similar way. If Q = Q1 ⊳ a ⊲ Q1 and R ≠ R1 ⊳ a ⊲ R1, then fa(R) = ga(R) = R, and the result follows as above. All remaining cases follow in a similar way, which finishes the proof of (3).

We now prove the lemma’s statement by simultaneous induction on the structure of Q and R. The base case, thus Q, R ∈ {T, F}, is again trivial. If Q = Q1 ⊳ a ⊲ Q1 and R = R1 ⊳ a ⊲ R1, then by (3),

  rp(Q) = rp(fa(Q1)) ⊳ a ⊲ rp(fa(Q1)),
  rp(R) = rp(ga(R1)) ⊳ a ⊲ rp(ga(R1)),

and by induction Q1 = rp(Q1) = rp(fa(Q1)) and R1 = rp(R1) = rp(ga(R1)). Hence, rp(Q) = Q1 ⊳ a ⊲ Q1, and

  rp(fa(Q)) = rp(fa(fa(Q1))) ⊳ a ⊲ rp(ga(fa(Q1)))
           = rp(fa(Q1)) ⊳ a ⊲ rp(fa(Q1))    (by Lemma 3.3)
           = Q1 ⊳ a ⊲ Q1,

and the equalities for R follow in a similar way. If Q = Q1 ⊳ a ⊲ Q1 and R ≠ R1 ⊳ a ⊲ R1, the lemma’s equalities follow in a similar way, although a bit simpler because ga(R) = fa(R) = R. For all remaining cases, the lemma’s equalities follow in a similar way. □

With Lemma 3.6 we can easily prove the following result.

Proposition 3.7 (rpbf is a normalization function). For each P ∈ CA, rpbf(P) is an rp-basic form, and for each rp-basic form P, rpbf(P) = P.

Proof. The first statement is Lemma 3.5. For the second statement, it suffices by Lemma 2.12 to prove that for each rp-basic form P, rp(P) = P. This follows by case distinction on P. The cases P ∈ {T, F} follow immediately, and otherwise P = P1 ⊳ a ⊲ P2, and thus rp(P) = rp(fa(P1)) ⊳ a ⊲ rp(ga(P2)). By Lemma 3.6, rp(fa(P1)) = P1 and rp(ga(P2)) = P2, hence rp(P) = P. □

Lemma 3.8. For all P ∈ BFA, CPrp(A) ⊢ P = rp(P).
Proof. We apply structural induction on P. The base cases P ∈ {T, F} are trivial. Assume P = P1 ⊳ a ⊲ P2. By induction CPrp(A) ⊢ Pi = rp(Pi). We proceed by a case distinction on the form that P1 and P2 can have:

1. If Pi ∈ {T, F, Qi ⊳ bi ⊲ Q′i} with bi ≠ a, then rp(P) = rp(P1) ⊳ a ⊲ rp(P2), and hence CPrp(A) ⊢ P = rp(P).
2. If P1 = R1 ⊳ a ⊲ R2 and P2 = S1 ⊳ a ⊲ S2, then by auxiliary result (2) in the proof of Lemma 3.5, rp(R1) and rp(S2) are rp-basic forms. We derive

  CPrp(A) ⊢ P = (R1 ⊳ a ⊲ R2) ⊳ a ⊲ (S1 ⊳ a ⊲ S2)
             = (R1 ⊳ a ⊲ R1) ⊳ a ⊲ (S2 ⊳ a ⊲ S2)    (by (CPrp1) and (CPrp2))
             = (rp(R1) ⊳ a ⊲ rp(R1)) ⊳ a ⊲ (rp(S2) ⊳ a ⊲ rp(S2))    (by IH)
             = (rp(fa(R1)) ⊳ a ⊲ rp(fa(R1))) ⊳ a ⊲ (rp(ga(S2)) ⊳ a ⊲ rp(ga(S2)))    (by Lemma 3.6)
             = rp(fa(R1 ⊳ a ⊲ R2)) ⊳ a ⊲ rp(ga(S1 ⊳ a ⊲ S2))
             = rp((R1 ⊳ a ⊲ R2) ⊳ a ⊲ (S1 ⊳ a ⊲ S2))
             = rp(P).

3. If P1 = R1 ⊳ a ⊲ R2 and P2 ∈ {T, F, Q′ ⊳ b ⊲ Q′′} with b ≠ a, we can proceed as in the previous case, but simplifying the right-hand arguments of the central condition a.
4. If P1 ∈ {T, F, Q′ ⊳ b ⊲ Q′′} with b ≠ a and P2 = S1 ⊳ a ⊲ S2, we can proceed as in case 2, but now simplifying the left-hand arguments of the central condition a. □

Theorem 3.9. For all P ∈ CA, CPrp(A) ⊢ P = rpbf(P).

Proof. By Corollary 2.17, CPrp(A) ⊢ P = bf(P), and by Lemma 3.8, CPrp(A) ⊢ bf(P) = rpbf(bf(P)). By Lemma 2.12, bf(bf(P)) = bf(P), and thus rpbf(bf(P)) = rpbf(P). □

Definition 3.10. The binary relation =rpbf on CA is defined as follows:

  P =rpbf Q ⇐⇒ rpbf(P) = rpbf(Q).

Theorem 3.11. For all P, Q ∈ CA, CPrp(A) ⊢ P = Q ⇐⇒ P =rpbf Q.

Proof. Assume CPrp(A) ⊢ P = Q. Then, by Theorem 3.9, CPrp(A) ⊢ rpbf(P) = rpbf(Q). In [4] the following two statements are proved (Theorem 6.3 and an auxiliary result in its proof), where =rp is a binary relation on CA:
1. For all P, Q ∈ CA, CPrp(A) ⊢ P = Q ⇐⇒ P =rp Q.
2. For all rp-basic forms P and Q, P =rp Q ⇒ P = Q.
By Lemma 3.5 these statements imply rpbf(P) = rpbf(Q), that is, P =rpbf Q.

Assume P =rpbf Q. By Lemma 2.12, bf(rpbf(P)) = bf(rpbf(Q)). By Theorem 2.16, CP ⊢ rpbf(P) = rpbf(Q). By Theorem 3.9, CPrp(A) ⊢ P = Q. □

So, the relation =rpbf is a congruence on CA that is axiomatized by CPrp(A). With this observation in mind, we define a transformation on evaluation trees that mimics the function rpbf, and prove that equality of two such transformed trees characterizes the congruence that is axiomatized by CPrp(A).
Definition 3.12. The unary repetition-proof evaluation function rpse : CA → TA yields repetition-proof evaluation trees and is defined by

  rpse(P) = rpe(se(P)).

The auxiliary function rpe : TA → TA is defined as follows (a ∈ A):

  rpe(T) = T,
  rpe(F) = F,
  rpe(X ⊳ a ⊲ Y) = rpe(Fa(X)) ⊳ a ⊲ rpe(Ga(Y)).

For a ∈ A, the auxiliary functions Fa : TA → TA and Ga : TA → TA are defined by

  Fa(T) = T,
  Fa(F) = F,
  Fa(X ⊳ b ⊲ Y) = Fa(X) ⊳ a ⊲ Fa(X)    if b = a,
  Fa(X ⊳ b ⊲ Y) = X ⊳ b ⊲ Y            otherwise,

  Ga(T) = T,
  Ga(F) = F,
  Ga(X ⊳ b ⊲ Y) = Ga(Y) ⊳ a ⊲ Ga(Y)    if b = a,
  Ga(X ⊳ b ⊲ Y) = X ⊳ b ⊲ Y            otherwise.
As a simple example we depict se((a ⊳ a ⊲ F) ⊳ a ⊲ F) (left) and the repetition-proof evaluation tree rpse((a ⊳ a ⊲ F) ⊳ a ⊲ F) (right):

          a                          a
        /   \                      /   \
       a     F                    a     F
      / \                       /   \
     a   F                     a     a
    / \                       / \   / \
   T   F                     T   T T   T
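The depicted example can be recomputed mechanically. A sketch in Python (our tuple encoding as before; se from Definition 2.2 is repeated, and the tree transformations Fa, Ga, and rpe follow Definition 3.12):

```python
def repl(x, y, z):
    # leaf replacement X[T -> Y, F -> Z]
    if x == "T": return y
    if x == "F": return z
    x1, a, x2 = x
    return (repl(x1, y, z), a, repl(x2, y, z))

def se(p):
    # short-circuit evaluation (Definition 2.2)
    if p in ("T", "F"): return p
    if isinstance(p, str): return ("T", p, "F")
    p1, q, r = p
    return repl(se(q), se(p1), se(r))

def F_(a, x):
    # F_a: in the left (true) branch, a repeated a evaluates to true again
    if x in ("T", "F"): return x
    x1, b, x2 = x
    return (F_(a, x1), a, F_(a, x1)) if b == a else x

def G_(a, x):
    # G_a: in the right (false) branch, a repeated a evaluates to false again
    if x in ("T", "F"): return x
    x1, b, x2 = x
    return (G_(a, x2), a, G_(a, x2)) if b == a else x

def rpe(x):
    if x in ("T", "F"): return x
    x1, a, x2 = x
    return (rpe(F_(a, x1)), a, rpe(G_(a, x2)))

def rpse(p):
    return rpe(se(p))

term = (("a", "a", "F"), "a", "F")   # (a <| a |> F) <| a |> F
print(se(term))    # ((('T', 'a', 'F'), 'a', 'F'), 'a', 'F')
print(rpse(term))  # ((('T', 'a', 'T'), 'a', ('T', 'a', 'T')), 'a', 'F')
```

Both printed trees match the two pictures above.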
The similarities between rpse and the function rpbf can be exploited:

Lemma 3.13. For all a ∈ A and X ∈ TA, Ga(Fa(X)) = Fa(Fa(X)) = Fa(X) and Fa(Ga(X)) = Ga(Ga(X)) = Ga(X).

Proof. By structural induction on X (cf. the proof of Lemma 3.3). □

We use the following lemmas in the proof of our next completeness result.
Lemma 3.14. For all P ∈ BFA and for all a ∈ A, rpe(Fa(se(P))) = se(rp(fa(P))) and rpe(Ga(se(P))) = se(rp(ga(P))).

Proof. We prove the first equality by structural induction on P. The base cases P ∈ {T, F} are trivial. For the inductive case P = Q ⊳ a ⊲ R, let b ∈ A. We have to distinguish the cases b = a and b ≠ a. If b = a, then

  rpe(Fa(se(Q ⊳ a ⊲ R))) = rpe(Fa(se(Q) ⊳ a ⊲ se(R)))
    = rpe(Fa(se(Q)) ⊳ a ⊲ Fa(se(Q)))
    = rpe(Fa(Fa(se(Q)))) ⊳ a ⊲ rpe(Ga(Fa(se(Q))))
    = rpe(Fa(se(Q))) ⊳ a ⊲ rpe(Fa(se(Q)))        (by Lemma 3.13)
    = se(rp(fa(Q))) ⊳ a ⊲ se(rp(fa(Q)))          (by IH)
    = se(rp(fa(Q)) ⊳ a ⊲ rp(fa(Q)))
    = se(rp(fa(fa(Q))) ⊳ a ⊲ rp(ga(fa(Q))))      (by Lemma 3.3)
    = se(rp(fa(Q ⊳ a ⊲ fa(Q))))
    = se(rp(fa(Q ⊳ a ⊲ R))).

If b ≠ a, then

  rpe(Fb(se(Q ⊳ a ⊲ R))) = rpe(Fb(se(Q) ⊳ a ⊲ se(R)))
    = rpe(se(Q) ⊳ a ⊲ se(R))
    = rpe(Fa(se(Q))) ⊳ a ⊲ rpe(Ga(se(R)))
    = se(rp(fa(Q))) ⊳ a ⊲ se(rp(ga(R)))          (by IH)
    = se(rp(fa(Q)) ⊳ a ⊲ rp(ga(R)))
    = se(rp(Q ⊳ a ⊲ R))
    = se(rp(fb(Q ⊳ a ⊲ R))).

The second equality can be proved in a similar way. □

Lemma 3.15. For all P ∈ BFA, rpe(se(P)) = se(rp(P)).
Proof. By a case distinction on P. The cases P ∈ {T, F} follow immediately, and otherwise P = Q ⊳ a ⊲ R, and thus

  rpe(se(Q ⊳ a ⊲ R)) = rpe(se(Q) ⊳ a ⊲ se(R))
    = rpe(Fa(se(Q))) ⊳ a ⊲ rpe(Ga(se(R)))
    = se(rp(fa(Q))) ⊳ a ⊲ se(rp(ga(R)))          (by Lemma 3.14)
    = se(rp(fa(Q)) ⊳ a ⊲ rp(ga(R)))
    = se(rp(Q ⊳ a ⊲ R)). □

Definition 3.16. Repetition-proof valuation congruence, notation =rpse, is defined on CA as follows:

  P =rpse Q ⇐⇒ rpse(P) = rpse(Q).
The following characterization result immediately implies that =rpse is a congruence relation on CA (and hence justifies calling it a congruence).

Proposition 3.17. For all P, Q ∈ CA, P =rpse Q ⇐⇒ P =rpbf Q.

Proof. In order to prove ⇒, assume rpse(P) = rpse(Q), thus rpe(se(P)) = rpe(se(Q)). By Corollary 2.17 and Theorem 2.9, rpe(se(bf(P))) = rpe(se(bf(Q))), so by Lemma 3.15, se(rp(bf(P))) = se(rp(bf(Q))). By Lemma 2.5, it follows that rp(bf(P)) = rp(bf(Q)), that is, P =rpbf Q.

In order to prove ⇐, assume P =rpbf Q, thus rp(bf(P)) = rp(bf(Q)). Then se(rp(bf(P))) = se(rp(bf(Q))) and by Lemma 3.15, rpe(se(bf(P))) = rpe(se(bf(Q))). By Corollary 2.17 and Theorem 2.9, se(bf(P)) = se(P) and se(bf(Q)) = se(Q), and thus rpe(se(P)) = rpe(se(Q)), that is, P =rpse Q. □

We end this section with a completeness result for repetition-proof valuation congruence.

Theorem 3.18 (Completeness of CPrp(A)). For all P, Q ∈ CA,

  CPrp(A) ⊢ P = Q ⇐⇒ P =rpse Q.

Proof. Combine Theorem 3.11 and Proposition 3.17.
□

4 Evaluation trees for contractive valuation congruence
In [4] we introduced CPcr(A), contractive CP, as the extension of CP with the following axiom schemes, where a ranges over A:

  (x ⊳ a ⊲ y) ⊳ a ⊲ z = x ⊳ a ⊲ z,    (CPcr1)
  x ⊳ a ⊲ (y ⊳ a ⊲ z) = x ⊳ a ⊲ z.    (CPcr2)

These schemes prescribe contraction for each atom a for respectively the true-case and the false-case (and are each other’s dual). It easily follows that the axiom schemes (CPrp1) and (CPrp2) are derivable from CPcr(A), so CPcr(A) is also an axiomatic extension of CPrp(A).

Again, we define a proper subset of basic forms with the property that each propositional statement can be proved equal to such a basic form.

Definition 4.1. Cr-basic forms are inductively defined:
– T and F are cr-basic forms, and
– P1 ⊳ a ⊲ P2 is a cr-basic form if P1 and P2 are cr-basic forms, and if Pi is not equal to T or F, the central condition in Pi is different from a.

It will turn out useful to define a function that transforms conditional statements into cr-basic forms, and that is comparable to the function bf (see Definition 2.10).
Definition 4.2. The cr-basic form function crbf : CA → CA is defined by crbf(P) = cr(bf(P)). The auxiliary function cr : BFA → BFA is defined as follows:

  cr(T) = T,
  cr(F) = F,
  cr(P ⊳ a ⊲ Q) = cr(ha(P)) ⊳ a ⊲ cr(ja(Q)).

For a ∈ A, the auxiliary functions ha : BFA → BFA and ja : BFA → BFA are defined by

  ha(T) = T,
  ha(F) = F,
  ha(P ⊳ b ⊲ Q) = ha(P) if b = a, and P ⊳ b ⊲ Q otherwise,

  ja(T) = T,
  ja(F) = F,
  ja(P ⊳ b ⊲ Q) = ja(Q) if b = a, and P ⊳ b ⊲ Q otherwise.
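To make these clauses concrete, the following Python sketch (ours, not the paper's; the tuple encoding and the function names are assumptions for illustration only) computes ha, ja and cr on basic forms:

```python
# Illustrative sketch of Definition 4.2 (our own encoding): a basic form
# is 'T', 'F', or a triple (p, a, q) standing for p <| a |> q.

def h(a, p):
    # h_a: in the true-branch direction, skip conditions on the same atom a
    if p in ('T', 'F'):
        return p
    left, b, _right = p
    return h(a, left) if b == a else p

def j(a, p):
    # j_a: in the false-branch direction, skip conditions on the same atom a
    if p in ('T', 'F'):
        return p
    _left, b, right = p
    return j(a, right) if b == a else p

def cr(p):
    # contract repeated central conditions, top-down
    if p in ('T', 'F'):
        return p
    left, a, right = p
    return (cr(h(a, left)), a, cr(j(a, right)))

# (T <| a |> F) <| a |> F contracts to the cr-basic form T <| a |> F
print(cr((('T', 'a', 'F'), 'a', 'F')))  # ('T', 'a', 'F')
```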
Thus, crbf maps a conditional statement P to bf(P) and then transforms bf(P) according to the auxiliary functions cr, ha, and ja.

Lemma 4.3. For all a ∈ A and P ∈ BFA, d(P) ≥ d(ha(P)) and d(P) ≥ d(ja(P)).

Proof. Fix some a ∈ A. We prove these inequalities by structural induction on P. The base cases P ∈ {T, F} are trivial. For the inductive case P = Q ⊳ b ⊲ R we have to distinguish the cases b = a and b ≠ a. If b = a, then

  d(Q ⊳ a ⊲ R) = 1 + max{d(Q), d(R)}
    ≥ 1 + d(Q)
    ≥ 1 + d(ha(Q))   (by IH)
    = 1 + d(ha(Q ⊳ a ⊲ R)),

and d(Q ⊳ a ⊲ R) ≥ d(ja(Q ⊳ a ⊲ R)) follows in a similar way. If b ≠ a, then ha(P) = ja(P) = P, and hence d(P) ≥ d(ha(P)) and d(P) ≥ d(ja(P)).
⊓ ⊔
Lemma 4.4. For all P ∈ CA, crbf(P) is a cr-basic form.

Proof. We first prove an auxiliary result:

  For all P ∈ BFA, cr(P) is a cr-basic form.   (4)

This follows by induction on the depth d(P) of P. If d(P) = 0, then P ∈ {T, F}, and hence cr(P) = P is a cr-basic form. For the inductive case d(P) = n + 1 it must be the case that P = Q ⊳ a ⊲ R. We find cr(Q ⊳ a ⊲ R) = cr(ha(Q)) ⊳ a ⊲ cr(ja(R)), which is a cr-basic form because
– by Lemma 4.3, ha(Q) and ja(R) are basic forms with depth smaller than or equal to n, so by the induction hypothesis, cr(ha(Q)) and cr(ja(R)) are cr-basic forms,
– by definition of the auxiliary functions ha and ja, the central condition of ha(Q) and ja(R) is not equal to a, hence cr(ha(Q)) ⊳ a ⊲ cr(ja(R)) is a cr-basic form.

This completes the proof of (4). The lemma's statement now follows by structural induction: the base cases (comprising a single atom a) are again trivial, and for the inductive case, crbf(P ⊳ Q ⊲ R) = cr(bf(P ⊳ Q ⊲ R)) = cr(S) for some basic form S by Lemma 2.11, and by (4), cr(S) is a cr-basic form. ⊓⊔

The following, somewhat technical lemma is used repeatedly.
Lemma 4.5. If Q ⊳ a ⊲ R is a cr-basic form, then Q = cr(Q) = cr(ha(Q)) and R = cr(R) = cr(ja(R)).

Proof. By simultaneous induction on the structure of Q and R. The base case, thus Q, R ∈ {T, F}, is again trivial. If Q = Q1 ⊳ b ⊲ Q2 and R = R1 ⊳ c ⊲ R2, then b ≠ a ≠ c and thus ha(Q) = Q and ja(R) = R. Moreover, the central conditions of Q1 and Q2 are different from b, hence hb(Q1) = Q1 and jb(Q2) = Q2, and thus

  cr(Q) = cr(hb(Q1)) ⊳ b ⊲ cr(jb(Q2))
    = cr(Q1) ⊳ b ⊲ cr(Q2)
    = Q1 ⊳ b ⊲ Q2.   (by IH)
The equalities for R follow in a similar way. If Q = Q1 ⊳ b ⊲ Q2 and R ∈ {T, F}, the lemma's equalities follow in a similar way, and this is also the case if Q ∈ {T, F} and R = R1 ⊳ b ⊲ R2. ⊓⊔

With Lemma 4.5 we can easily prove the following result.

Proposition 4.6 (crbf is a normalization function). For each P ∈ CA, crbf(P) is a cr-basic form, and for each cr-basic form P, crbf(P) = P.

Proof. The first statement is Lemma 4.4. For the second statement, it suffices by Lemma 2.12 to prove that cr(P) = P. We prove this by case distinction on P. The cases P ∈ {T, F} follow immediately, and otherwise P = P1 ⊳ a ⊲ P2, and thus cr(P) = cr(ha(P1)) ⊳ a ⊲ cr(ja(P2)). By Lemma 4.5, cr(ha(P1)) = P1 and cr(ja(P2)) = P2, hence cr(P) = P. ⊓⊔

Lemma 4.7. For all P ∈ BFA, CPcr(A) ⊢ P = cr(P).

Proof. We apply structural induction on P. The base cases P ∈ {T, F} are trivial. Assume P = P1 ⊳ a ⊲ P2. By induction, CPcr(A) ⊢ Pi = cr(Pi). Furthermore, by auxiliary result (4) in the proof of Lemma 4.4, cr(P) is a cr-basic form, and by Lemma 4.5,

  cr(P) = cr(ha(P1)) ⊳ a ⊲ cr(ja(P2)) = cr(P1) ⊳ a ⊲ cr(P2).   (5)

We derive

  CPcr(A) ⊢ P1 ⊳ a ⊲ P2 = cr(P1) ⊳ a ⊲ cr(P2)   (by IH)
    = cr(ha(P1)) ⊳ a ⊲ cr(ja(P2))   (by (5))
    = cr(P1 ⊳ a ⊲ P2). ⊓⊔
Theorem 4.8. For all P ∈ CA, CPcr(A) ⊢ P = crbf(P).

Proof. By Corollary 2.17, CPcr(A) ⊢ P = bf(P), and by Lemma 4.7, CPcr(A) ⊢ bf(P) = crbf(bf(P)). By Lemma 2.12, crbf(bf(P)) = crbf(P). ⊓⊔

Definition 4.9. The binary relation =crbf on CA is defined as follows: P =crbf Q ⇐⇒ crbf(P) = crbf(Q).

Theorem 4.10. For all P, Q ∈ CA, CPcr(A) ⊢ P = Q ⇐⇒ P =crbf Q.

Proof. Assume CPcr(A) ⊢ P = Q. Then, by Theorem 4.8, CPcr(A) ⊢ crbf(P) = crbf(Q). In [4] the following two statements are proved (Theorem 6.4 and an auxiliary result in its proof), where =cr is a binary relation on CA:
1. For all P, Q ∈ CA, CPcr(A) ⊢ P = Q ⇐⇒ P =cr Q.
2. For all cr-basic forms P and Q, P =cr Q ⇒ P = Q.
By Lemma 4.4, these statements imply crbf(P) = crbf(Q), that is, P =crbf Q.

Assume P =crbf Q. By Lemma 2.12, bf(crbf(P)) = bf(crbf(Q)). By Theorem 2.16, CP ⊢ crbf(P) = crbf(Q). By Theorem 4.8, CPcr(A) ⊢ P = Q. ⊓⊔

Hence, the relation =crbf is a congruence on CA that is axiomatized by CPcr(A). With this observation in mind, we define a transformation on evaluation trees that mimics the function crbf, and prove that equality of two such transformed trees characterizes the congruence that is axiomatized by CPcr(A).

Definition 4.11. The unary contractive evaluation function crse : CA → TA yields contractive evaluation trees and is defined by crse(P) = cre(se(P)). The auxiliary function cre : TA → TA is defined as follows (a ∈ A):

  cre(T) = T,
  cre(F) = F,
  cre(X ⊳ a ⊲ Y) = cre(Ha(X)) ⊳ a ⊲ cre(Ja(Y)).

For a ∈ A, the auxiliary functions Ha : TA → TA and Ja : TA → TA are defined by

  Ha(T) = T,
  Ha(F) = F,
  Ha(X ⊳ b ⊲ Y) = Ha(X) if b = a, and X ⊳ b ⊲ Y otherwise,

  Ja(T) = T,
  Ja(F) = F,
  Ja(X ⊳ b ⊲ Y) = Ja(Y) if b = a, and X ⊳ b ⊲ Y otherwise.
As a simple example we depict se((a ⊳ a ⊲ F) ⊳ a ⊲ F) and the contractive evaluation tree crse((a ⊳ a ⊲ F) ⊳ a ⊲ F):

  se((a ⊳ a ⊲ F) ⊳ a ⊲ F) = ((T ⊳ a ⊲ F) ⊳ a ⊲ F) ⊳ a ⊲ F,
  crse((a ⊳ a ⊲ F) ⊳ a ⊲ F) = T ⊳ a ⊲ F.
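The example above can be recomputed with a small Python sketch (our own illustrative encoding: trees and statements as nested tuples, atoms as strings; se and cre follow the definitions in this paper, but the code and its names are assumptions, not the authors'):

```python
# Our sketch: evaluation trees and conditional statements are 'T', 'F',
# an atom string, or a triple (x, q, z) standing for x <| q |> z.

def se(p):
    # evaluation tree of a conditional statement
    if p in ('T', 'F'):
        return p
    if isinstance(p, str):          # an atom a has tree T <| a |> F
        return ('T', p, 'F')
    x, q, z = p                     # p = x <| q |> z
    return graft(se(q), se(x), se(z))

def graft(t, tt, tf):
    # substitute tt for the T-leaves and tf for the F-leaves of t
    if t == 'T':
        return tt
    if t == 'F':
        return tf
    l, a, r = t
    return (graft(l, tt, tf), a, graft(r, tt, tf))

def H(a, t):
    if t in ('T', 'F'):
        return t
    l, b, _ = t
    return H(a, l) if b == a else t

def J(a, t):
    if t in ('T', 'F'):
        return t
    _, b, r = t
    return J(a, r) if b == a else t

def cre(t):
    if t in ('T', 'F'):
        return t
    l, a, r = t
    return (cre(H(a, l)), a, cre(J(a, r)))

P = (('a', 'a', 'F'), 'a', 'F')     # (a <| a |> F) <| a |> F
print(se(P))        # ((('T', 'a', 'F'), 'a', 'F'), 'a', 'F')
print(cre(se(P)))   # ('T', 'a', 'F')
```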
The similarities between the evaluation function crse and the function crbf can be exploited.

Lemma 4.12. For all P ∈ BFA and for all a ∈ A,

  cre(Ha(se(P))) = se(cr(ha(P)))   and   cre(Ja(se(P))) = se(cr(ja(P))).

Proof. We prove the first equality by structural induction on P. The base cases P ∈ {T, F} are trivial. For the inductive case P = Q ⊳ a ⊲ R, let b ∈ A. We have to distinguish the cases b = a and b ≠ a. If b = a, then

  cre(Ha(se(Q ⊳ a ⊲ R))) = cre(Ha(se(Q) ⊳ a ⊲ se(R)))
    = cre(Ha(se(Q)))
    = se(cr(ha(Q)))   (by IH)
    = se(cr(ha(Q ⊳ a ⊲ R))).

If b ≠ a, then

  cre(Hb(se(Q ⊳ a ⊲ R))) = cre(Hb(se(Q) ⊳ a ⊲ se(R)))
    = cre(se(Q) ⊳ a ⊲ se(R))
    = cre(Ha(se(Q))) ⊳ a ⊲ cre(Ja(se(R)))
    = se(cr(ha(Q))) ⊳ a ⊲ se(cr(ja(R)))   (by IH)
    = se(cr(ha(Q)) ⊳ a ⊲ cr(ja(R)))
    = se(cr(Q ⊳ a ⊲ R))
    = se(cr(hb(Q ⊳ a ⊲ R))).

The second equality can be proved in a similar way. ⊓⊔

We use the following lemmas in the proof of our next completeness result.

Lemma 4.13. For all P ∈ BFA, cre(se(P)) = se(cr(P)).
Proof. By a case distinction on P. The cases P ∈ {T, F} follow immediately, and otherwise P = Q ⊳ a ⊲ R, and thus

  cre(se(Q ⊳ a ⊲ R)) = cre(se(Q) ⊳ a ⊲ se(R))
    = cre(Ha(se(Q))) ⊳ a ⊲ cre(Ja(se(R)))
    = se(cr(ha(Q))) ⊳ a ⊲ se(cr(ja(R)))   (by Lemma 4.12)
    = se(cr(ha(Q)) ⊳ a ⊲ cr(ja(R)))
    = se(cr(Q ⊳ a ⊲ R)). ⊓⊔
Definition 4.14. Contractive valuation congruence, notation =crse, is defined on CA as follows: P =crse Q ⇐⇒ crse(P) = crse(Q).

The following characterization result immediately implies that =crse is a congruence relation on CA (and hence justifies calling it a congruence).

Proposition 4.15. For all P, Q ∈ CA, P =crse Q ⇐⇒ P =crbf Q.

Proof. In order to prove ⇒, assume crse(P) = crse(Q), thus cre(se(P)) = cre(se(Q)). By Corollary 2.17 and Theorem 2.9, cre(se(bf(P))) = cre(se(bf(Q))), so by Lemma 4.13, se(cr(bf(P))) = se(cr(bf(Q))). By Lemma 2.5, it follows that cr(bf(P)) = cr(bf(Q)), that is, P =crbf Q.

In order to prove ⇐, assume P =crbf Q, thus cr(bf(P)) = cr(bf(Q)). Then se(cr(bf(P))) = se(cr(bf(Q))) and by Lemma 4.13, cre(se(bf(P))) = cre(se(bf(Q))). By Corollary 2.17 and Theorem 2.9, se(bf(P)) = se(P) and se(bf(Q)) = se(Q), and thus cre(se(P)) = cre(se(Q)), that is, P =crse Q. ⊓⊔

Our final result in this section is a completeness result for contractive valuation congruence.

Theorem 4.16 (Completeness of CPcr(A)). For all P, Q ∈ CA, CPcr(A) ⊢ P = Q ⇐⇒ P =crse Q.

Proof. Combine Theorem 4.10 and Proposition 4.15.
⊓⊔

5 Evaluation trees for memorizing valuation congruence
In [4] we introduced CPmem, memorizing CP, as the extension of CP with the following axiom:

  x ⊳ y ⊲ (z ⊳ u ⊲ (v ⊳ y ⊲ w)) = x ⊳ y ⊲ (z ⊳ u ⊲ w).   (CPmem)

The axiom (CPmem) expresses that the first evaluation value of y is memorized. More precisely, a "memorizing evaluation" is one with the property that upon the evaluation of a compound propositional statement, the first evaluation value of each atom is memorized throughout the evaluation. We write CPmem for the set CP ∪ {(CPmem)} of axioms. Replacing the variable y in axiom (CPmem) by F ⊳ y ⊲ T and/or the variable u by F ⊳ u ⊲ T yields all other memorizing patterns:

  (z ⊳ u ⊲ (w ⊳ y ⊲ v)) ⊳ y ⊲ x = (z ⊳ u ⊲ w) ⊳ y ⊲ x,   (CPm1)
  x ⊳ y ⊲ ((v ⊳ y ⊲ w) ⊳ u ⊲ z) = x ⊳ y ⊲ (w ⊳ u ⊲ z),   (CPm2)
  ((w ⊳ y ⊲ v) ⊳ u ⊲ z) ⊳ y ⊲ x = (w ⊳ u ⊲ z) ⊳ y ⊲ x.   (CPm3)

Furthermore, if we replace in axiom (CPmem) u by F, we find the contraction law

  x ⊳ y ⊲ (v ⊳ y ⊲ w) = x ⊳ y ⊲ w,   (6)

and replacing y by F ⊳ y ⊲ T then yields the dual contraction law

  (w ⊳ y ⊲ v) ⊳ y ⊲ x = w ⊳ y ⊲ x.   (7)
Hence, CPmem is an axiomatic extension of CPcr(A). We define a proper subset of basic forms with the property that each propositional statement can be proved equal to such a basic form.

Definition 5.1. Let A′ be a subset of A. Mem-basic forms over A′ are inductively defined:
– T and F are mem-basic forms over A′, and
– P ⊳ a ⊲ Q is a mem-basic form over A′ if a ∈ A′ and P and Q are mem-basic forms over A′ \ {a}.
P is a mem-basic form if for some A′ ⊂ A, P is a mem-basic form over A′.

Note that if A is finite, the number of mem-basic forms is also finite. It will turn out useful to define a function that transforms conditional statements into mem-basic forms.

Definition 5.2. The mem-basic form function membf : CA → CA is defined by membf(P) = mem(bf(P)). The auxiliary function mem : BFA → BFA is defined as follows:

  mem(T) = T,
  mem(F) = F,
  mem(P ⊳ a ⊲ Q) = mem(ℓa(P)) ⊳ a ⊲ mem(ra(Q)).

For a ∈ A, the auxiliary functions ℓa : BFA → BFA and ra : BFA → BFA are defined by

  ℓa(T) = T,
  ℓa(F) = F,
  ℓa(P ⊳ b ⊲ Q) = ℓa(P) if b = a, and ℓa(P) ⊳ b ⊲ ℓa(Q) otherwise,

  ra(T) = T,
  ra(F) = F,
  ra(P ⊳ b ⊲ Q) = ra(Q) if b = a, and ra(P) ⊳ b ⊲ ra(Q) otherwise.
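In the same illustrative tuple encoding as before (ours, not the paper's; names are assumptions), ℓa, ra and mem can be sketched as follows. Note the contrast with ha and ja: here the auxiliary functions recurse into both branches, so every later occurrence of a is resolved to its memorized value, not just immediately repeated ones.

```python
# Illustrative sketch of Definition 5.2 (our encoding): a basic form is
# 'T', 'F', or a triple (p, a, q) standing for p <| a |> q.

def l(a, p):
    # l_a: a was evaluated true, so resolve all later a-conditions to true
    if p in ('T', 'F'):
        return p
    x, b, y = p
    return l(a, x) if b == a else (l(a, x), b, l(a, y))

def r(a, p):
    # r_a: a was evaluated false, so resolve all later a-conditions to false
    if p in ('T', 'F'):
        return p
    x, b, y = p
    return r(a, y) if b == a else (r(a, x), b, r(a, y))

def mem(p):
    # mem-basic form of a basic form
    if p in ('T', 'F'):
        return p
    x, a, y = p
    return (mem(l(a, x)), a, mem(r(a, y)))

# bf((a <| b |> F) <| a |> F) = ((T <| a |> F) <| b |> F) <| a |> F,
# and mem maps it to the mem-basic form (T <| b |> F) <| a |> F
print(mem(((('T', 'a', 'F'), 'b', 'F'), 'a', 'F')))  # (('T', 'b', 'F'), 'a', 'F')
```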
Thus, membf maps a conditional statement P to bf(P) and then transforms bf(P) according to the auxiliary functions mem, ℓa, and ra. We will use the following equalities.

Lemma 5.3. For all a, b ∈ A with a ≠ b and P ∈ BFA,

  ℓa(ℓb(P)) = ℓb(ℓa(P)),   (5.3.1)
  ra(ℓb(P)) = ℓb(ra(P)),   (5.3.2)
  ra(rb(P)) = rb(ra(P)),   (5.3.3)
  ℓa(rb(P)) = rb(ℓa(P)).   (5.3.4)
Proof. By structural induction on P. The base cases P ∈ {T, F} are trivial. For the inductive case P = Q ⊳ c ⊲ R we have to distinguish three cases:

1. If c = a, then equality (5.3.1) follows by

  ℓa(ℓb(Q ⊳ a ⊲ R)) = ℓa(ℓb(Q) ⊳ a ⊲ ℓb(R))
    = ℓa(ℓb(Q))
    = ℓb(ℓa(Q))   (by IH)
    = ℓb(ℓa(Q ⊳ a ⊲ R)),

and equality (5.3.2) follows by

  ra(ℓb(Q ⊳ a ⊲ R)) = ra(ℓb(Q) ⊳ a ⊲ ℓb(R))
    = ra(ℓb(R))
    = ℓb(ra(R))   (by IH)
    = ℓb(ra(Q ⊳ a ⊲ R)).

Equalities (5.3.3) and (5.3.4) can be proved in a similar way.

2. If c = b, then equality (5.3.1) follows by

  ℓa(ℓb(Q ⊳ b ⊲ R)) = ℓa(ℓb(Q))
    = ℓb(ℓa(Q))   (by IH)
    = ℓb(ℓa(Q) ⊳ b ⊲ ℓa(R))
    = ℓb(ℓa(Q ⊳ b ⊲ R)),

and equality (5.3.2) follows by

  ra(ℓb(Q ⊳ b ⊲ R)) = ra(ℓb(Q))
    = ℓb(ra(Q))   (by IH)
    = ℓb(ra(Q) ⊳ b ⊲ ra(R))
    = ℓb(ra(Q ⊳ b ⊲ R)).

Equalities (5.3.3) and (5.3.4) can be proved in a similar way.

3. If c ∉ {a, b}, then equality (5.3.1) follows by

  ℓa(ℓb(Q ⊳ c ⊲ R)) = ℓa(ℓb(Q) ⊳ c ⊲ ℓb(R))
    = ℓa(ℓb(Q)) ⊳ c ⊲ ℓa(ℓb(R))
    = ℓb(ℓa(Q)) ⊳ c ⊲ ℓb(ℓa(R))   (by IH)
    = ℓb(ℓa(Q ⊳ c ⊲ R)).

Equalities (5.3.2)–(5.3.4) can be proved in a similar way. ⊓⊔

Lemma 5.4. For all a ∈ A and P ∈ BFA, d(P) ≥ d(ℓa(P)) and d(P) ≥ d(ra(P)).
Proof. Fix some a ∈ A. We prove these inequalities by structural induction on P. The base cases P ∈ {T, F} are trivial. For the inductive case P = Q ⊳ b ⊲ R we have to distinguish the cases b = a and b ≠ a. If b = a, then

  d(Q ⊳ a ⊲ R) = 1 + max{d(Q), d(R)}
    ≥ 1 + d(Q)
    ≥ 1 + d(ℓa(Q))   (by IH)
    = 1 + d(ℓa(Q ⊳ a ⊲ R)),

and d(Q ⊳ a ⊲ R) ≥ d(ra(Q ⊳ a ⊲ R)) follows in a similar way. If b ≠ a, then

  d(Q ⊳ b ⊲ R) = 1 + max{d(Q), d(R)}
    ≥ 1 + max{d(ℓa(Q)), d(ℓa(R))}   (by IH)
    = d(ℓa(Q) ⊳ b ⊲ ℓa(R))
    = d(ℓa(Q ⊳ b ⊲ R)),

and d(Q ⊳ b ⊲ R) ≥ d(ra(Q ⊳ b ⊲ R)) follows in a similar way.
⊓ ⊔
Lemma 5.5. For all P ∈ CA, membf(P) is a mem-basic form.

Proof. We first prove an auxiliary result:

  For all P ∈ BFA, mem(P) is a mem-basic form.   (8)

This follows by induction on the depth d(P) of P. If d(P) = 0, then P ∈ {T, F}, and hence mem(P) = P is a mem-basic form. For the inductive case d(P) = n + 1 it must be the case that P = Q ⊳ a ⊲ R. We find mem(Q ⊳ a ⊲ R) = mem(ℓa(Q)) ⊳ a ⊲ mem(ra(R)), which is a mem-basic form because by Lemma 5.4, ℓa(Q) and ra(R) are basic forms with depth smaller than or equal to n, so by the induction hypothesis, mem(ℓa(Q)) is a mem-basic form over AQ and mem(ra(R)) is a mem-basic form over AR (for suitable subsets AQ and AR of A). Notice that by definition of ℓa and ra, the atom a does not occur in AQ ∪ AR. Hence, mem(ℓa(Q)) ⊳ a ⊲ mem(ra(R)) is a mem-basic form over AQ ∪ AR ∪ {a}, which completes the proof of (8).

The lemma's statement now follows by structural induction: the base cases (comprising a single atom a) are again trivial, and for the inductive case, membf(P ⊳ Q ⊲ R) = mem(bf(P ⊳ Q ⊲ R)) = mem(S) for some basic form S by Lemma 2.11, and by (8), mem(S) is a mem-basic form.
⊓ ⊔
The following lemma is used repeatedly.

Lemma 5.6. If Q ⊳ a ⊲ R is a mem-basic form, then Q = mem(Q) = mem(ℓa(Q)) and R = mem(R) = mem(ra(R)).

Proof. Assume Q ⊳ a ⊲ R is a mem-basic form over A′. By definition, Q and R are mem-basic forms over A′ \ {a}. We prove both pairs of equalities simultaneously by induction on the structure of Q and R. The base case, thus Q, R ∈ {T, F}, is trivial. If Q = Q1 ⊳ b ⊲ Q2 and R = R1 ⊳ c ⊲ R2, then ℓa(Q) = Q and ra(R) = R. Moreover, the Qi are mem-basic forms over A′ \ {a, b}, hence ℓb(Q1) = Q1 and rb(Q2) = Q2, and thus

  mem(Q) = mem(ℓb(Q1)) ⊳ b ⊲ mem(rb(Q2))
    = mem(Q1) ⊳ b ⊲ mem(Q2)   (by IH)
    = Q1 ⊳ b ⊲ Q2.

The equalities for R follow in a similar way. If Q = Q1 ⊳ b ⊲ Q2 and R ∈ {T, F}, the lemma's equalities follow in a similar way, and this is also the case if Q ∈ {T, F} and R = R1 ⊳ b ⊲ R2. ⊓⊔

With Lemma 5.6 we can easily prove the following result.

Proposition 5.7 (membf is a normalization function). For each P ∈ CA, membf(P) is a mem-basic form, and for each mem-basic form P, membf(P) = P.

Proof. The first statement is Lemma 5.5. For the second statement, it suffices by Lemma 2.12 to prove that mem(P) = P. We prove this by case distinction on P. The cases P ∈ {T, F} follow immediately, and otherwise P = P1 ⊳ a ⊲ P2, and thus mem(P) = mem(ℓa(P1)) ⊳ a ⊲ mem(ra(P2)). By Lemma 5.6, mem(ℓa(P1)) = P1 and mem(ra(P2)) = P2, hence mem(P) = P. ⊓⊔

Lemma 5.8. For all P ∈ BFA, CPmem ⊢ P = mem(P).

Proof. We apply structural induction on P. The base cases P ∈ {T, F} are trivial. Assume P = P1 ⊳ a ⊲ P2. By induction, CPmem ⊢ Pi = mem(Pi). Furthermore, by auxiliary result (8) in the proof of Lemma 5.5, mem(P) is a mem-basic form, and mem(Pi) are mem-basic forms over A \ {a}, and thus

  mem(P) = mem(ℓa(P1)) ⊳ a ⊲ mem(ra(P2))
    = mem(P1) ⊳ a ⊲ mem(P2).   (by Lemma 5.6)   (9)

We derive

  CPmem ⊢ P1 ⊳ a ⊲ P2 = mem(P1) ⊳ a ⊲ mem(P2)   (by IH)
    = mem(ℓa(P1)) ⊳ a ⊲ mem(ra(P2))   (by (9))
    = mem(P1 ⊳ a ⊲ P2). ⊓⊔

Theorem 5.9. For all P ∈ CA, CPmem ⊢ P = membf(P).

Proof. By Corollary 2.17, CPmem ⊢ P = bf(P), and by Lemma 5.8, CPmem ⊢ bf(P) = membf(bf(P)). By Lemma 2.12, membf(bf(P)) = membf(P). ⊓⊔

Definition 5.10. The binary relation =membf on CA is defined as follows: P =membf Q ⇐⇒ membf(P) = membf(Q).
Theorem 5.11. For all P, Q ∈ CA, CPmem ⊢ P = Q ⇐⇒ P =membf Q.

Proof. Assume CPmem ⊢ P = Q. Then, by Theorem 5.9, CPmem ⊢ membf(P) = membf(Q). In [4] the following two statements are proved (Theorem 8.1 and Lemma 8.4), where =mem is a binary relation on CA:
1. For all P, Q ∈ CA, CPmem ⊢ P = Q ⇐⇒ P =mem Q.
2. For all mem-basic forms P and Q, P =mem Q ⇒ P = Q.
By Lemma 5.5 these statements imply membf(P) = membf(Q), that is, P =membf Q.

Assume P =membf Q. By Lemma 2.12, bf(membf(P)) = bf(membf(Q)). By Theorem 2.16, CP ⊢ membf(P) = membf(Q). By Theorem 5.9, CPmem ⊢ P = Q. ⊓⊔

So, the relation =membf is a congruence on CA that is axiomatized by CPmem. With this observation in mind, we define a transformation on evaluation trees that mimics the function membf, and prove that equality of two such transformed trees characterizes the congruence that is axiomatized by CPmem.

Definition 5.12. The unary memorizing evaluation function memse : CA → TA yields memorizing evaluation trees and is defined by memse(P) = meme(se(P)). The auxiliary function meme : TA → TA is defined as follows (a ∈ A):

  meme(T) = T,
  meme(F) = F,
  meme(X ⊳ a ⊲ Y) = meme(La(X)) ⊳ a ⊲ meme(Ra(Y)).

For a ∈ A, the auxiliary functions La : TA → TA and Ra : TA → TA are defined by

  La(T) = T,
  La(F) = F,
  La(X ⊳ b ⊲ Y) = La(X) if b = a, and La(X) ⊳ b ⊲ La(Y) otherwise,

  Ra(T) = T,
  Ra(F) = F,
  Ra(X ⊳ b ⊲ Y) = Ra(Y) if b = a, and Ra(X) ⊳ b ⊲ Ra(Y) otherwise.
As a simple example we depict se((a ⊳ b ⊲ F) ⊳ a ⊲ F) and the memorizing evaluation tree memse((a ⊳ b ⊲ F) ⊳ a ⊲ F):

  se((a ⊳ b ⊲ F) ⊳ a ⊲ F) = ((T ⊳ a ⊲ F) ⊳ b ⊲ F) ⊳ a ⊲ F,
  memse((a ⊳ b ⊲ F) ⊳ a ⊲ F) = (T ⊳ b ⊲ F) ⊳ a ⊲ F.
The similarities between memse and the function membf will of course be exploited.

Lemma 5.13. For all a, b ∈ A with a ≠ b and X ∈ TA,

1. La(Lb(X)) = Lb(La(X)),
2. Ra(Lb(X)) = Lb(Ra(X)),
3. Ra(Rb(X)) = Rb(Ra(X)),
4. La(Rb(X)) = Rb(La(X)).

Proof. By structural induction on X (cf. the proof of Lemma 5.3). ⊓⊔

We use the following lemmas in the proof of our next completeness result.

Lemma 5.14. For all a ∈ A and P ∈ BFA,

  meme(La(se(P))) = se(mem(ℓa(P)))   and   meme(Ra(se(P))) = se(mem(ra(P))).
Proof. We first prove an auxiliary result:

  For all a ∈ A and P ∈ BFA, La(se(P)) = se(ℓa(P)) and Ra(se(P)) = se(ra(P)).   (10)

Fix some a ∈ A. We prove (10) by structural induction on P. The base cases P ∈ {T, F} are trivial. For the inductive case P = Q ⊳ b ⊲ R we have to distinguish the cases b = a and b ≠ a. If b = a, then

  La(se(Q ⊳ a ⊲ R)) = La(se(Q) ⊳ a ⊲ se(R))
    = La(se(Q))
    = se(ℓa(Q))   (by IH)
    = se(ℓa(Q ⊳ a ⊲ R)),

and if b ≠ a, then

  La(se(Q ⊳ b ⊲ R)) = La(se(Q) ⊳ b ⊲ se(R))
    = La(se(Q)) ⊳ b ⊲ La(se(R))
    = se(ℓa(Q)) ⊳ b ⊲ se(ℓa(R))   (by IH)
    = se(ℓa(Q ⊳ b ⊲ R)).

This finishes the proof of (10). We now prove the lemma's equalities. Fix some a ∈ A. We prove the first equality by induction on d(P). The base case d(P) = 0, thus P ∈ {T, F}, is trivial. For the inductive case d(P) = n + 1, it must be the case that P = Q ⊳ b ⊲ R. We have to distinguish the cases b = a and b ≠ a. If b = a, then

  meme(La(se(Q ⊳ a ⊲ R))) = meme(La(se(Q) ⊳ a ⊲ se(R)))
    = meme(La(se(Q)))
    = se(mem(ℓa(Q)))   (by IH)
    = se(mem(ℓa(Q ⊳ a ⊲ R))).

If b ≠ a, then

  meme(La(se(Q ⊳ b ⊲ R))) = meme(La(se(Q) ⊳ b ⊲ se(R)))
    = meme(La(se(Q)) ⊳ b ⊲ La(se(R)))
    = meme(Lb(La(se(Q)))) ⊳ b ⊲ meme(Rb(La(se(R))))
    = meme(La(Lb(se(Q)))) ⊳ b ⊲ meme(La(Rb(se(R))))   (by Lemma 5.13)
    = meme(La(se(ℓb(Q)))) ⊳ b ⊲ meme(La(se(rb(R))))   (by (10))
    = se(mem(ℓa(ℓb(Q)))) ⊳ b ⊲ se(mem(ℓa(rb(R))))   (by IH)
    = se(mem(ℓb(ℓa(Q)))) ⊳ b ⊲ se(mem(rb(ℓa(R))))   (by Lemma 5.3)
    = se(mem(ℓa(Q) ⊳ b ⊲ ℓa(R)))
    = se(mem(ℓa(Q ⊳ b ⊲ R))).

The second equality can be proved in a similar way. ⊓⊔

Lemma 5.15. For all P ∈ BFA, meme(se(P)) = se(mem(P)).
Proof. By a case distinction on P. The cases P ∈ {T, F} follow immediately, and otherwise P = Q ⊳ a ⊲ R, and thus

  meme(se(Q ⊳ a ⊲ R)) = meme(se(Q) ⊳ a ⊲ se(R))
    = meme(La(se(Q))) ⊳ a ⊲ meme(Ra(se(R)))
    = se(mem(ℓa(Q))) ⊳ a ⊲ se(mem(ra(R)))   (by Lemma 5.14)
    = se(mem(ℓa(Q)) ⊳ a ⊲ mem(ra(R)))
    = se(mem(Q ⊳ a ⊲ R)). ⊓⊔

Definition 5.16. Memorizing valuation congruence, notation =memse, is defined on CA as follows: P =memse Q ⇐⇒ memse(P) = memse(Q).

The following characterization result immediately implies that =memse is a congruence relation on CA (and hence justifies calling it a congruence).

Proposition 5.17. For all P, Q ∈ CA, P =memse Q ⇐⇒ P =membf Q.

Proof. For ⇒, assume memse(P) = memse(Q), thus meme(se(P)) = meme(se(Q)). By Corollary 2.17 and Theorem 2.9, meme(se(bf(P))) = meme(se(bf(Q))), so by Lemma 5.15, se(mem(bf(P))) = se(mem(bf(Q))). By Lemma 2.5, it follows that mem(bf(P)) = mem(bf(Q)), that is, P =membf Q.

In order to prove ⇐, assume P =membf Q, thus mem(bf(P)) = mem(bf(Q)). Then se(mem(bf(P))) = se(mem(bf(Q))) and by Lemma 5.15, meme(se(bf(P))) = meme(se(bf(Q))). By Corollary 2.17 and Theorem 2.9, meme(se(P)) = meme(se(Q)), that is, P =memse Q. ⊓⊔
We end this section with a completeness result for memorizing valuation congruence.

Theorem 5.18 (Completeness of CPmem). For all P, Q ∈ CA, CPmem ⊢ P = Q ⇐⇒ P =memse Q.

Proof. Combine Theorem 5.11 and Proposition 5.17. ⊓⊔
6 Evaluation trees for static valuation congruence
The most identifying axiomatic extension of CP we consider can be defined by adding the following axiom to CPmem:

  F ⊳ x ⊲ F = F.   (CPs)

By axiom (CPs), no atom a can have a side effect because T ⊳ (F ⊳ a ⊲ F) ⊲ P = T ⊳ F ⊲ P = P for all P ∈ CA. So, the evaluation value of each atom in a conditional statement is memorized. Below we argue that the order of atomic evaluations is irrelevant. We write CPstat for the set of these axioms, thus CPstat = CPmem ∪ {(CPs)} = CP ∪ {(CPmem), (CPs)}.

Lemma 6.1. For all P, Q ∈ CA, CPstat ⊢ P = P ⊳ Q ⊲ P.

Proof.

  CPstat ⊢ P = T ⊳ (F ⊳ Q ⊲ F) ⊲ P   (by axioms (CPs) and (CP2))
    = (T ⊳ F ⊲ P) ⊳ Q ⊲ (T ⊳ F ⊲ P)   (by axiom (CP4))
    = P ⊳ Q ⊲ P.   (by axiom (CP2)) ⊓⊔
Observe that the duality principle also holds in CPstat, in particular, CPstat ⊢ T ⊳ x ⊲ T = T. A simple example on CPstat illustrates how the order of evaluation of x and y can be swapped:

  x ⊳ y ⊲ F = y ⊳ x ⊲ F.   (11)

Equation (11) can be derived as follows:

  CPstat ⊢ x ⊳ y ⊲ F = T ⊳ F ⊲ (x ⊳ y ⊲ F)   (by (CP2))
    = T ⊳ (F ⊳ x ⊲ F) ⊲ (x ⊳ y ⊲ F)   (by (CPs))
    = (x ⊳ y ⊲ F) ⊳ x ⊲ (x ⊳ y ⊲ F)   (by (CP2), (CP4))
    = ((T ⊳ x ⊲ F) ⊳ y ⊲ F) ⊳ x ⊲ ((T ⊳ x ⊲ F) ⊳ y ⊲ F)   (by (CP3))
    = (T ⊳ y ⊲ F) ⊳ x ⊲ (F ⊳ y ⊲ F)   (by (CPm3), (CPm2))
    = y ⊳ x ⊲ F.   (by (CP3), (CPs))
(CPstat) (the contraction law (7))
Axiom (CPstat) expresses how the order of evaluation of u and y can be swapped, and (as explained Section 5) the contraction law (7) expresses that the evaluation result of y is memorized. Because we will rely on results for CPst that are proven in [4], we first prove the following result. 28
Proposition 6.2. The axiom sets CPst and CPstat are equally strong. Proof. We show that all axioms in the one set are derivable from the other set. We first prove that the axiom (CPmem) is derivable from CPst : CPst ⊢ x ⊳ y ⊲ (z ⊳ u ⊲ (v ⊳ y ⊲ w)) = x ⊳ y ⊲ ((v ⊳ y ⊲ w) ⊳ (F ⊳ u ⊲ T) ⊲ z)
by (CP4)
= x ⊳ y ⊲ ((v ⊳ (F ⊳ u ⊲ T) ⊲ z) ⊳ y ⊲ (w ⊳ (F ⊳ u ⊲ T) ⊲ z)) = x ⊳ y ⊲ (w ⊳ (F ⊳ u ⊲ T) ⊲ z)
by (CPstat) by (6)
= x ⊳ y ⊲ (z ⊳ u ⊲ w),
by (CP4)
where the contraction law (6), that is x ⊳ y ⊲ (v ⊳ y ⊲ w) = x ⊳ y ⊲ w, is derivable from CPst : replace y by F ⊳ y ⊲ T in (7). Hence CPstat ⊢ (CPmem). Furthermore, if we take u = v = F in axiom (CPstat) we find F ⊳ x ⊲ F = F, hence CPst ⊢ CPstat . In order to show that CPstat ⊢ CPst recall that the contraction law (7) is derivable from CPmem (see Section 5). So, it remains to be proved that CPstat ⊢ (CPstat) and with equation (11) we can easily derive this axiom from CPstat : CPstat ⊢ (x ⊳ y ⊲ z) ⊳ u ⊲ v = (x ⊳ y ⊲ (z ⊳ u ⊲ v)) ⊳ u ⊲ v
by (CPm1)
= (x ⊳ y ⊲ (z ⊳ u ⊲ v)) ⊳ u ⊲ (z ⊳ u ⊲ v) = x ⊳ (y ⊳ u ⊲ F) ⊲ (z ⊳ u ⊲ v)
by (6) by (CP4)
= x ⊳ (u ⊳ y ⊲ F) ⊲ (z ⊳ u ⊲ v) = (x ⊳ u ⊲ (z ⊳ u ⊲ v)) ⊳ y ⊲ (z ⊳ u ⊲ v)
by (11) by (CP4)
= (x ⊳ u ⊲ v) ⊳ y ⊲ (z ⊳ u ⊲ v).
by (6) ⊓ ⊔
We define a proper subset of basic forms with the property that each propositional statement can be proved equal to such a basic form. This is more complicated than for the valuation congruences discussed before, because the order of atoms is now irrelevant, which can be inferred from equation (11).

Definition 6.3. Let Au ⊂ A∗ be the set of strings over A with the property that each σ ∈ Au contains no multiple occurrences of the same atom.⁴ St-basic forms over σ ∈ Au are defined as follows:
– T and F are st-basic forms over ǫ.
– P ⊳ a ⊲ Q is an st-basic form over ρa ∈ Au if P and Q are st-basic forms over ρ.
P is an st-basic form if for some σ ∈ Au, P is an st-basic form over σ.

For example, an st-basic form over ab ∈ A∗ has the following form: (B1 ⊳ a ⊲ B2) ⊳ b ⊲ (B3 ⊳ a ⊲ B4) with Bi ∈ {T, F}. If σ = a1 a2 · · · an, there exist 2^(2^n) different st-basic forms over σ.

⁴ Recall that we write ǫ for the empty string, thus ǫ ∈ Au.
It will turn out useful to define a function that transforms conditional statements to st-basic forms. Therefore, given σ ∈ Au we consider terms in CA′, where A′ is the finite subset of A that contains the elements of σ. If σ = ǫ, then A′ = ∅ and the st-basic forms over ǫ are T and F.

Definition 6.4. The alphabet function α : Au → 2^A returns the set of atoms of a string in Au: α(ǫ) = ∅, and α(σa) = α(σ) ∪ {a}.

Definition 6.5. Let σ ∈ Au. The conditional statement E^σ ∈ BFα(σ) is defined as

  E^ǫ = F   and, if σ = ρa,   E^σ = E^ρ ⊳ a ⊲ E^ρ.
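As an illustrative sketch (our own tuple encoding, not the paper's), E^σ can be built by folding the string σ from left to right, doubling the tree at each atom; all leaves are F:

```python
# Our sketch of E^sigma from Definition 6.5: basic forms are 'F' or
# triples (p, a, q) standing for p <| a |> q.

def E(sigma):
    t = 'F'                  # E^epsilon = F
    for a in sigma:          # E^(rho a) = E^rho <| a |> E^rho
        t = (t, a, t)
    return t

print(E('ab'))  # (('F', 'a', 'F'), 'b', ('F', 'a', 'F'))
```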
The st-basic form function stbf σ : Cα(σ) → Cα(σ) is defined by stbf σ(P) = membf(T ⊳ E^σ ⊲ P).

So, for each σ ∈ Au, E^σ is an st-basic form over σ in which the constant T does not occur, e.g., E^ab = (F ⊳ a ⊲ F) ⊳ b ⊲ (F ⊳ a ⊲ F).

Lemma 6.6. Let σ ∈ Au. For all P ∈ CA, CPstat ⊢ P = T ⊳ E^σ ⊲ P.

Proof. By induction on the structure of σ. If σ = ǫ, then E^σ = F and CPstat ⊢ P = T ⊳ E^ǫ ⊲ P. If σ = ρa for some ρ ∈ Au and a ∈ A, then E^σ = E^ρ ⊳ a ⊲ E^ρ, and hence

  CPstat ⊢ P = P ⊳ a ⊲ P   (by Lemma 6.1)
    = (T ⊳ E^ρ ⊲ P) ⊳ a ⊲ (T ⊳ E^ρ ⊲ P)   (by IH)
    = T ⊳ (E^ρ ⊳ a ⊲ E^ρ) ⊲ P.   (by axiom (CP4)) ⊓⊔
Proposition 6.7 (stbf σ is a normalization function on Cα(σ)). Let σ ∈ Au. For all P ∈ Cα(σ), stbf σ(P) is an st-basic form, and for each st-basic form P over σ, stbf σ(P) = P.

Proof. We prove the first statement by induction on the structure of σ. If σ = ǫ, then P contains no atoms. Hence, bf(P) ∈ {T, F}, and stbf ǫ(P) = membf(T ⊳ F ⊲ P) = membf(P) = mem(bf(P)) ∈ {T, F}. If σ = ρa for some ρ ∈ Au and a ∈ A, then

  stbf σ(P) = membf(T ⊳ E^σ ⊲ P)
    = mem(bf(T ⊳ E^σ ⊲ P))
    = mem(E^σ[F ↦ bf(P)])
    = mem((E^ρ ⊳ a ⊲ E^ρ)[F ↦ bf(P)])
    = mem(E^ρ[F ↦ bf(P)] ⊳ a ⊲ E^ρ[F ↦ bf(P)])
    = mem(ℓa(E^ρ[F ↦ bf(P)])) ⊳ a ⊲ mem(ra(E^ρ[F ↦ bf(P)]))
    = mem(E^ρ[F ↦ ℓa(bf(P))]) ⊳ a ⊲ mem(E^ρ[F ↦ ra(bf(P))])
    = mem(bf(T ⊳ E^ρ ⊲ ℓa(P))) ⊳ a ⊲ mem(bf(T ⊳ E^ρ ⊲ ra(P)))
    = stbf ρ(ℓa(P)) ⊳ a ⊲ stbf ρ(ra(P)).

Now both ℓa(bf(P)) and ra(bf(P)) are conditional statements in Cα(ρ) (thus, not containing a), so by induction stbf ρ(ℓa(P)) and stbf ρ(ra(P)) are st-basic forms over ρ. Hence, stbf σ(P) is an st-basic form over σ.

The second statement follows also by induction on the structure of σ. The base case σ = ǫ, thus P ∈ {T, F}, is trivial. If σ = ρa for some ρ ∈ Au and a ∈ A, then P = P1 ⊳ a ⊲ P2 for st-basic forms Pi over ρ. By induction, stbf ρ(Pi) = Pi, thus membf(T ⊳ E^ρ ⊲ Pi) = Pi. We find

  stbf σ(P) = membf(T ⊳ (E^ρ ⊳ a ⊲ E^ρ) ⊲ (P1 ⊳ a ⊲ P2))
    = mem(bf((T ⊳ E^ρ ⊲ (P1 ⊳ a ⊲ P2)) ⊳ a ⊲ (T ⊳ E^ρ ⊲ (P1 ⊳ a ⊲ P2))))
    = mem(bf(T ⊳ E^ρ ⊲ (P1 ⊳ a ⊲ P2)) ⊳ a ⊲ bf(T ⊳ E^ρ ⊲ (P1 ⊳ a ⊲ P2)))
    = mem(ℓa(bf(T ⊳ E^ρ ⊲ (P1 ⊳ a ⊲ P2)))) ⊳ a ⊲ mem(ra(bf(T ⊳ E^ρ ⊲ (P1 ⊳ a ⊲ P2))))
    = mem(ℓa(T ⊳ E^ρ ⊲ (P1 ⊳ a ⊲ P2))) ⊳ a ⊲ mem(ra(T ⊳ E^ρ ⊲ (P1 ⊳ a ⊲ P2)))
    = mem(T ⊳ E^ρ ⊲ P1) ⊳ a ⊲ mem(T ⊳ E^ρ ⊲ P2)
    = membf(T ⊳ E^ρ ⊲ P1) ⊳ a ⊲ membf(T ⊳ E^ρ ⊲ P2)
    = P1 ⊳ a ⊲ P2.   (by IH) ⊓⊔
Lemma 6.8. Let σ ∈ Au. For all P ∈ Cα(σ), CPstat ⊢ P = stbf σ(P).

Proof. By Lemma 6.6, CPstat ⊢ P = T ⊳ E^σ ⊲ P. By Theorem 5.9, CPstat ⊢ T ⊳ E^σ ⊲ P = membf(T ⊳ E^σ ⊲ P), hence CPstat ⊢ P = stbf σ(P). ⊓⊔

Theorem 6.9. Let σ ∈ Au. For all P, Q ∈ Cα(σ), CPstat ⊢ P = Q ⇐⇒ stbf σ(P) = stbf σ(Q).

Proof. Assume CPstat ⊢ P = Q. Then, by Proposition 6.7, CPstat ⊢ stbf σ(P) = stbf σ(Q), and by Proposition 6.2, CPst ⊢ stbf σ(P) = stbf σ(Q). In [4] the following two statements are proved (Theorem 9.1 and an auxiliary result in its proof), where =st is a binary relation on CA:
1. For all P, Q ∈ CA, CPst ⊢ P = Q ⇐⇒ P =st Q.
2. For all st-basic forms P and Q, P =st Q ⇒ P = Q.
By Proposition 6.7 these statements imply stbf σ(P) = stbf σ(Q).

Assume stbf σ(P) = stbf σ(Q), and thus T ⊳ E^σ ⊲ P =membf T ⊳ E^σ ⊲ Q. By Theorem 5.11, CPmem ⊢ T ⊳ E^σ ⊲ P = T ⊳ E^σ ⊲ Q, and by Lemma 6.6 this implies CPstat ⊢ P = Q. ⊓⊔

Definition 6.10. Let σ ∈ Au. The binary relation =stbf,σ on Cα(σ) is defined as follows: P =stbf,σ Q ⇐⇒ stbf σ(P) = stbf σ(Q).

Theorem 6.11. Let σ ∈ Au. For all P, Q ∈ Cα(σ), CPstat ⊢ P = Q ⇐⇒ P =stbf,σ Q.

Proof. Assume CPstat ⊢ P = Q. By Lemma 6.8, CPstat ⊢ stbf σ(P) = stbf σ(Q), and by Theorem 6.9, P =stbf,σ Q. Assume P =stbf,σ Q, and thus stbf σ(P) = stbf σ(Q). By Theorem 6.9, CPstat ⊢ P = Q. ⊓⊔

Hence, the relation =stbf,σ is a congruence on Cα(σ) that is axiomatized by CPstat. With this observation in mind, we define a transformation on evaluation trees that mimics the function stbf σ, and prove that equality of two such transformed trees characterizes the congruence that is axiomatized by CPstat.
Definition 6.12. Let σ ∈ Au. The partial unary static evaluation function stse σ : Cα(σ) → TA yields static evaluation trees and is defined as follows: stse σ(P) = memse(T ⊳ E^σ ⊲ P), where E^σ is defined in Definition 6.5.

We first give a simple example. Let P = (a ⊳ b ⊲ F) ⊳ a ⊲ T. The evaluation tree se(P), the static evaluation tree stse ba(P), and the static evaluation tree stse ab(P) are, respectively,

  se(P) = ((T ⊳ a ⊲ F) ⊳ b ⊲ F) ⊳ a ⊲ T,
  stse ba(P) = (T ⊳ b ⊲ F) ⊳ a ⊲ (T ⊳ b ⊲ T),
  stse ab(P) = (T ⊳ a ⊲ T) ⊳ b ⊲ (F ⊳ a ⊲ T).

The different static evaluation trees correspond to the different ways in which one can present truth tables for P, that is, the different possible orderings of the valuation values of the atoms occurring in P:

  a b | (a ⊳ b ⊲ F) ⊳ a ⊲ T        b a | (a ⊳ b ⊲ F) ⊳ a ⊲ T
  T T | T                          T T | T
  T F | F                          T F | T
  F T | T                          F T | F
  F F | T                          F F | T
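The static evaluation trees of the example above can be recomputed with the following self-contained Python sketch (our own encoding and names; se, meme and E^σ follow the definitions in this paper, but the code itself is an illustrative assumption, not the authors' implementation):

```python
# Our sketch: statements and trees are 'T', 'F', an atom string, or a
# triple (x, q, z) standing for x <| q |> z.
# stse_sigma(P) = meme(se(T <| E^sigma |> P)).

def se(p):
    if p in ('T', 'F'):
        return p
    if isinstance(p, str):            # atom
        return ('T', p, 'F')
    x, q, z = p
    return graft(se(q), se(x), se(z))

def graft(t, tt, tf):
    # substitute tt for T-leaves and tf for F-leaves of t
    if t == 'T':
        return tt
    if t == 'F':
        return tf
    l, a, r = t
    return (graft(l, tt, tf), a, graft(r, tt, tf))

def L(a, t):
    if t in ('T', 'F'):
        return t
    x, b, y = t
    return L(a, x) if b == a else (L(a, x), b, L(a, y))

def R(a, t):
    if t in ('T', 'F'):
        return t
    x, b, y = t
    return R(a, y) if b == a else (R(a, x), b, R(a, y))

def meme(t):
    if t in ('T', 'F'):
        return t
    x, a, y = t
    return (meme(L(a, x)), a, meme(R(a, y)))

def E(sigma):
    t = 'F'
    for a in sigma:
        t = (t, a, t)
    return t

def stse(sigma, p):
    return meme(se(('T', E(sigma), p)))

P = (('a', 'b', 'F'), 'a', 'T')       # (a <| b |> F) <| a |> T
print(stse('ba', P))  # (('T', 'b', 'F'), 'a', ('T', 'b', 'T'))
print(stse('ab', P))  # (('T', 'a', 'T'), 'b', ('F', 'a', 'T'))
```

The two resulting trees query the atoms in different orders but encode the same truth table, which is the point of static valuation congruence.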
The reason that stse σ(P) is defined only for a particular σ ∈ Au is that in order to prove completeness of CPstat (and CPst), we need to relate conditional statements that may contain different sets of atoms, such as for example

  a   and   (T ⊳ b ⊲ T) ⊳ a ⊲ F,   (12)

which should then have equal static evaluation trees. With respect to example (12), appropriate static evaluation trees for a need to contain b-nodes, such as for example (T ⊳ b ⊲ T) ⊳ a ⊲ (F ⊳ b ⊲ F) or (T ⊳ a ⊲ F) ⊳ b ⊲ (T ⊳ a ⊲ F). The similarities between stse σ and the function stbf σ can be exploited and lead to our final completeness result.

Definition 6.13. Let σ ∈ Au. Static valuation congruence over σ, notation =stse,σ, is defined on Cα(σ) as follows: P =stse,σ Q ⇐⇒ stse σ(P) = stse σ(Q).
The following characterization result immediately implies that for all σ ∈ Au, =stse,σ is a congruence relation on Cα(σ) (and hence justifies calling it a congruence).

Proposition 6.14. Let σ ∈ Au. For all P, Q ∈ Cα(σ), P =stse,σ Q ⇐⇒ P =stbf,σ Q.

Proof. We have to show

  memse(T ⊳ E^σ ⊲ P) = memse(T ⊳ E^σ ⊲ Q) ⇐⇒ membf(T ⊳ E^σ ⊲ P) = membf(T ⊳ E^σ ⊲ Q),

and this immediately follows from Proposition 5.17.
⊓ ⊔
Theorem 6.15 (Completeness of CPstat). Let σ ∈ Au. For all P, Q ∈ Cα(σ),

  CPstat ⊢ P = Q ⇐⇒ P =stse,σ Q.

Proof. Combine Theorem 6.11 and Proposition 6.14. ⊓⊔

7 Conclusions
In [4] we introduced proposition algebra using Hoare’s conditional x ⊳ y ⊲ z and the constants T and F. We defined a number of varieties of so-called valuation algebras in order to capture different semantics for the evaluation of conditional statements, and provided axiomatizations for the resulting valuation congruences: CP (four axioms) characterizes the least identifying valuation congruence we consider, and the extension CPmem (one extra axiom) characterizes the most identifying valuation congruence below propositional logic, while static valuation congruence, axiomatized by adding the simple axiom F ⊳ x ⊲ F = F to CPmem , can be seen as a characterization of propositional logic. In [3, 5] we introduced an alternative valuation semantics for proposition algebra in the form of Hoare-McCarthy algebras (HMAs) that is more elegant than the semantical framework provided in [4]: HMA-based semantics has the advantage that one can define a valuation congruence without first defining the valuation equivalence it is contained in. In this paper, we use Staudt’s evaluation trees [14] to define free valuation congruence as the relation =se (see Section 2), and this appears to be a relatively simple and stand-alone exercise, resulting in a semantics that is elegant and much simpler than HMA-based semantics [3, 5] and the semantics defined in [4]. By Theorem 2.9, =se coincides with “free valuation congruence as defined in [4]” because both relations are axiomatized by CP (see [4, Thm.4.4 and Thm.6.2]). The advantage of “evaluation tree semantics” is that for a given conditional statement P , the evaluation tree se(P ) determines all relevant evaluations, so P =se Q is determined by evaluation trees that contain no more atoms than those that occur in P and Q, which corresponds to the use of truth tables in propositional logic. 
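The recursive construction of se(·) can be made concrete. Below is a small, hypothetical Python model (the names Leaf, Node, replace, se_atom, and se_cond are ours) of the evaluation trees used in Section 2: se(a) is the tree with root a and leaves T and F, and se(P ⊳ Q ⊲ R) is se(Q) with every T-leaf replaced by se(P) and every F-leaf by se(R).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Leaf:
    value: bool  # the constants T (True) and F (False)

@dataclass(frozen=True)
class Node:
    atom: str
    true_branch: "Leaf | Node"   # continuation when the atom evaluates to T
    false_branch: "Leaf | Node"  # continuation when the atom evaluates to F

def replace(t, tt, ff):
    """Replace every T-leaf of t by tt and every F-leaf by ff."""
    if isinstance(t, Leaf):
        return tt if t.value else ff
    return Node(t.atom,
                replace(t.true_branch, tt, ff),
                replace(t.false_branch, tt, ff))

def se_atom(a):
    # se(a): evaluate a, then yield its value
    return Node(a, Leaf(True), Leaf(False))

def se_cond(p, q, r):
    # se(P <| Q |> R) = se(Q)[T -> se(P), F -> se(R)]
    return replace(q, p, r)

# The running example P = (a <| b |> F) <| a |> T:
tree_P = se_cond(se_cond(se_atom("a"), se_atom("b"), Leaf(False)),
                 se_atom("a"), Leaf(True))
print(tree_P)
```

The resulting tree has root a, whose false branch is the leaf T and whose true branch is a b-node, matching the leftmost tree depicted in Section 6.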
In Section 3 we define repetition-proof valuation congruence on CA by

  P =rpse Q ⇐⇒ rpse(P) = rpse(Q),

where rpse is a transformation function from evaluation trees to repetition-proof evaluation trees. It is obvious that the transformation rpse(P) is "natural", given the axiom schemes (CPrp1) and (CPrp2) that are characteristic for CPrp (A). The equivalence on CA that we want to prove is

  CPrp (A) ⊢ P = Q ⇐⇒ P =rpse Q,   (13)
and this equivalence implies that =rpse coincides with "repetition-proof valuation congruence as defined in [4]" because both are axiomatized by CPrp (A) (see [4, Thm.6.3]). However, equivalence (13) implies that =rpse is a congruence relation, and we could not find a direct proof of this fact; we therefore chose to simulate the transformation rpse by the transformation rpbf on conditional statements, and to prove that the resulting equivalence relation =rpbf is a congruence relation. The fact that =rpbf is an appropriate congruence relation follows from Theorem 3.11, that is,

  For all P, Q ∈ CA, CPrp (A) ⊢ P = Q ⇐⇒ P =rpbf Q

(the proof of which depends on [4, Thm.6.3]), and from Theorem 3.9, that is,

  For all P ∈ CA, CPrp (A) ⊢ P = rpbf (P).

In order to prove equivalence (13), which is Theorem 3.18, it is thus sufficient to prove that =rpbf and =rpse coincide, and this is Proposition 3.17.

The structure of our proofs of the axiomatizations of the other valuation congruences that we consider is very similar, although the case for static valuation congruence requires a slightly more complex proof (below we return to this point). Moreover, these axiomatizations are incremental: the axiom systems CPrp (A) up to and including CPstat all share the axioms of CP, and each succeeding system is defined by the addition of either one or two axioms, in most cases making previously added axiom(s) redundant. Given some σ ∈ Au, this implies that in Cα(σ),

  =se ⊆ =rpse ⊆ =crse ⊆ =memse ⊆ =stse,σ,

where all these inclusions are proper if σ ≠ ε (and thus α(σ) ≠ ∅, and thus A ≠ ∅). We conclude that repetition-proof evaluation trees and the valuation congruence =rpse provide a full-fledged, simple and elegant semantics for CPrp (A), and that this is also the case for contractive evaluation trees and the valuation congruence =crse, and for memorizing evaluation trees and the valuation congruence =memse.
Static valuation congruence over Cα(σ), for some σ ∈ Au, coincides with any standard semantics of propositional logic in the following sense:

  P =stse,σ Q   if, and only if,   P ↔ Q is a tautology in propositional logic,

where P and Q denote the propositional translations given by Hoare's definition [11]: the conditional is translated by

  x ⊳ y ⊲ z = (x ∧ y) ∨ (¬y ∧ z),

while atoms and the truth value constants T and F are translated to themselves.
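As a quick sanity check (ours, not part of the paper), Hoare's identity can be verified by exhaustive enumeration of the eight classical valuations:

```python
from itertools import product

def cond(x, y, z):
    # Hoare's conditional x <| y |> z for fixed truth values
    return x if y else z

# Check x <| y |> z  =  (x ∧ y) ∨ (¬y ∧ z) over all eight valuations.
ok = all(
    cond(x, y, z) == ((x and y) or ((not y) and z))
    for x, y, z in product((True, False), repeat=3)
)
print(ok)  # True: the identity holds classically
```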
Let a ∈ A. The fact that =stse,a identifies more than =memse is immediately clear: F ⊳ a ⊲ F =stse,a F, while it is easy to see that F ⊳ a ⊲ F ≠memse F. Our proof that CPstat, and thus CPst, is an axiomatization of static valuation congruence is slightly more complex than those for the other axiomatizations because upon the evaluation of two conditional statements, there is not necessarily a canonical order for the evaluation of their atoms, and therefore such an ordering, as encoded by a static evaluation tree, should be fixed beforehand. To this purpose, we use some σ ∈ Au.
A spin-off of our approach can be called "basic form semantics for proposition algebra": for each valuation congruence C considered, two conditional statements are C-valuation congruent if, and only if, they have equal C-basic forms, where C-basic forms are obtained by a syntactic transformation of conditional statements, which is a form of normalization.

We conclude with a brief digression on short-circuit logic, which we defined in [6] (see [5] for a quick introduction). Familiar binary connectives that occur in the context of imperative programming and that prescribe short-circuit evaluation, such as && (sometimes called "logical AND"), are often explained with help of the conditional:

  P && Q =def if P then Q else false,

so P && Q =def Q ⊳ P ⊲ F, and ¬P =def F ⊳ P ⊲ T. Short-circuit logic focuses on the question

  Which are the logical laws that characterize short-circuit evaluation of binary propositional connectives?   (14)
A first approach to this question is to adopt the conditional as an auxiliary operator, as is done in [5, 6], and to analyze this question in the setting of an appropriate valuation congruence (or several valuation congruences if one wishes to consider "mixed conditional statements"). An alternative approach to question (14) is to establish axiomatizations for short-circuited binary connectives in which the conditional is not used. With respect to memorizing valuation congruence, this is done in [6], where we exploit the fact that modulo this congruence, the conditional can be expressed with short-circuited binary connectives. For free valuation congruence, an equational axiomatization of short-circuited binary propositional connectives (in which the conditional is not used) is provided by Staudt in [14], where

  se(P && Q) =def se(P)[T ↦ se(Q)]   and   se(¬P) =def se(P)[T ↦ F, F ↦ T],

and where the associated completeness proof is based on decomposition properties of evaluation trees.

Some applications and examples based on proposition algebra and the valuation congruences discussed in this paper are described in [6]. We give an example of the use of CPrp (A), taken from [6, Ex.4].

Example 7.1. Consider simple arithmetic expressions over the natural numbers (or the integers) and a program notation for imperative programs or algorithms in which each atom is either a test (n==e), with e some arithmetical expression, or an assignment (n=e). Assume that assignments, when used as conditions, always evaluate to true (next to having their intended effect). Then these atoms satisfy the axioms of CPrp (A). (Of course, CPrp (A) does not characterize all equations that are valid with respect to this particular example; e.g., (0==0) = T is not derivable from CPrp (A).) Let the connective && be defined by P && Q = Q ⊳ P ⊲ F.
Then the assignment (n=n+1) clearly does not satisfy the contraction law a && a = a, that is, (T ⊳ a ⊲ F) ⊳ a ⊲ F = T ⊳ a ⊲ F, because ((n=n+1) && (n=n+1)) && (n==2) and (n=n+1) && (n==2) can yield different evaluation results. Hence, we have a clear example of the repetition-proof characteristic of CPrp (A) that does not satisfy the axioms of CPcr (A). This example is related to the work of Lars Wortel [15], in which a comparable instance of Propositional Dynamic Logic (PDL) [8, 7] is investigated. Note that in such a simple instance of PDL, it is natural to assume that assignments (as atoms) always evaluate to true because it is natural to assume that they always succeed.
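The failure of the contraction law for (n=n+1) can be illustrated with a small sketch (ours, not from the paper): Python's short-circuiting `and` plays the role of &&, and the atoms are modeled as methods on a state object, with the assignment atom always returning true.

```python
class State:
    def __init__(self):
        self.n = 0

    def inc(self):
        # atom (n=n+1): has a side effect and, as a condition, evaluates to true
        self.n += 1
        return True

    def is_two(self):
        # atom (n==2): a pure test
        return self.n == 2

s1 = State()
r1 = s1.inc() and s1.inc() and s1.is_two()  # ((n=n+1) && (n=n+1)) && (n==2)

s2 = State()
r2 = s2.inc() and s2.is_two()               # (n=n+1) && (n==2)

print(r1, r2)  # True False: contracting the repeated atom changes the result
```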
For repetition-proof and contractive valuation congruence, finite axiomatizations for short-circuited binary propositional connectives in which the conditional is not used have not yet been found, and it is an open question whether such axiomatizations exist. It may very well be the case that "evaluation trees for proposition algebra" is a suitable point of departure for further analysis of question (14) with respect to these valuation congruences. We finally note that all valuation congruences considered in this paper can be used as a basis for systematic analysis of the kind of side effects that may occur upon the evaluation of short-circuited connectives as in Example 7.1, and we quote these words of Parnas [13]:

"Most mainline methods disparage side effects as a bad programming practice. Yet even in well-structured, reliable software, many components do have side effects; side effects are very useful in practice. It is time to investigate methods that deal with side effects as the normal case."
References

1. Bergstra, J.A., Bethke, I., and Rodenburg, P.H. (1995). A propositional logic with 4 values: true, false, divergent and meaningless. Journal of Applied Non-Classical Logics, 5(2):199-218.
2. Bergstra, J.A. and Loots, M.E. (2002). Program algebra for sequential code. Journal of Logic and Algebraic Programming, 51(2):125-156.
3. Bergstra, J.A. and Ponse, A. (2010). On Hoare-McCarthy algebras. Available at http://arxiv.org/abs/1012.5059 [cs.LO].
4. Bergstra, J.A. and Ponse, A. (2011). Proposition algebra. ACM Transactions on Computational Logic, 12(3), Article 21 (36 pages).
5. Bergstra, J.A. and Ponse, A. (2012). Proposition algebra and short-circuit logic. In F. Arbab and M. Sirjani (eds.), Proceedings of the 4th International Conference on Fundamentals of Software Engineering (FSEN 2011), Tehran. Volume 7141 of Lecture Notes in Computer Science, pages 15-31. Springer-Verlag.
6. Bergstra, J.A., Ponse, A., and Staudt, D.J.C. (2013). Short-circuit logic. Available at arXiv:1010.3674v4 [cs.LO, math.LO], 18 Oct 2010; this version (v4): 12 Mar 2013.
7. Eijck, D.J.N. van, and Stokhof, M.J.B. (2006). The gamut of dynamic logics. In D. Gabbay and J. Woods (eds.), Handbook of the History of Logic, Volume 7, pages 499-600. Elsevier.
8. Harel, D. (1984). Dynamic logic. In D. Gabbay and F. Günthner (eds.), Handbook of Philosophical Logic, Volume II, pages 497-604. Reidel Publishing Company.
9. Hayes, I.J., He Jifeng, Hoare, C.A.R., Morgan, C.C., Roscoe, A.W., Sanders, J.W., Sorensen, I.H., Spivey, J.M., and Sufrin, B.A. (1987). Laws of programming. Communications of the ACM, 30(8):672-686.
10. Hoare, C.A.R. (1985). Communicating Sequential Processes. Prentice-Hall, Englewood Cliffs.
11. Hoare, C.A.R. (1985). A couple of novelties in the propositional calculus. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 31(2):173-178.
12. Hoare, C.A.R. and Jones, C.B. (1989). Essays in Computing Science. Prentice-Hall, Englewood Cliffs.
13. Parnas, D.L. (2010). Really rethinking 'formal methods'. Computer, 43(1):28-34, IEEE Computer Society (Jan. 2010).
14. Staudt, D.J.C. (2012). Completeness for two left-sequential logics. MSc thesis Logic, University of Amsterdam (May 2012). Available at arXiv:1206.1936v1 [cs.LO].
15. Wortel, L. (2011). Side effects in steering fragments. MSc thesis Logic, University of Amsterdam (September 2011). Available at arXiv:1109.2222v1 [cs.LO].