A generalization of Twist-Structures Semantics for n-valued logics
Víctor Fernández, Carina Murciano
Basic Sciences Institute (Mathematical Area), Philosophy College
National University of San Juan (UNSJ), San Juan, Argentina
E-mail:
[email protected],
[email protected]
Abstract. In this paper we show a generalization of Twist-Structures Semantics which allows one, under certain conditions, to express consequence relations defined by abstract finite matrices in an alternative way. This general construction (called Discriminant Structures Semantics here) is based on the existence of a discriminant pair, to be defined. A number of motivating examples are shown, and some technical results are proved. Besides that, we compare this technique with other already existing ones.
1 Preliminaries
The semantics currently called Twist-Structures Semantics was developed by M. Fidel and D. Vakarelov (see [3] and [12]) to obtain an alternative presentation of the algebraic semantics for the Strongly Intuitionistic Nelson's Logic N, characterized by a special connective ∼: the strong negation. In its original application, Twist-Structures Semantics consists of a class of algebras whose supports are sets S ⊆ H × H∗. Here, H is a Heyting algebra, while H∗ is its dual (in the underlying order). The "torsion" of the second set allows one to interpret the behavior of ∼ in a more faithful way. Besides that, the implication connective ⊃ of N is related to the natural order of H × H∗, too. Twist-Structures Semantics was applied to other logics different from N (see [4], or [6]). In these new applications, a basic idea is preserved: twist-structures are just new (more intuitive) representations of previously existing algebras. A somewhat different approach to this semantics was given in [5] and [7], wherein twist-structures provide a new characterization of matrix logics (in particular, of n-valued ones). The underlying idea here is not focused on the algebraic (or
lattice-theoretic) characterization of the supports of these structures, but on other considerations. Among them:
• The elements of the structures to be defined are pairs (or, more generally, t-uples).
• Since the logics considered are defined by n-valued matrices, Twist-Structures Semantics discriminates, in a certain sense, the truth-values involved.
• In the logics considered, a negation connective ¬ allows this discrimination.
On the other hand, the process of "torsion" along some axes of the twist-structures is used in the mentioned works as well (this, obviously, is the reason for the name twist-structure). In the present article we intend to give a generalization of the mentioned construction, in such a way that it can be applied, eventually, to arbitrary n-valued logics (with or without negation connectives). As we will see, the central point here is the separation of the truth-values of the logics under consideration. The generalization proposed here will be called Discriminant Structures Semantics, or D.S.S. for short. In addition, besides the definition of this construction, some abstract technical results will be proved, and we will show some examples of this semantics, too.
Briefly, the organization of the topics treated in this article is as follows: after a minimal fixation of definitions and notation about Abstract Logic, an example of Twist-Structures Semantics (already presented in the literature) will be shown in the next section. This motivates the definition of D.S.S., exemplified by the logic L?, in Section 3. In the same section we will study in which way an abstract version of Discriminant Structures Semantics can be defined. This discussion will be extended (in a more technical way) in Section 4. We will show there that the key to our construction is the existence of a discriminant pair. By the way, this existence is not a trivial fact, because not every matrix can define such a pair, as a counter-example shows at the end of the section.
Considering that discriminant structures have a certain algebraic character, we will relate our construction to some basic ideas of Abstract Algebraic Logic. It will be proved, in Section 5, that Discriminant Structures Semantics can be defined even in the case of non-algebraizable logics. This is done by means of examples of logics that admit D.S.S. but are not protoalgebraic (nor algebraizable, therefore). The last two sections are focused on comparisons of D.S.S. with other constructions of the same type. In particular, Section 6 relates discriminant structures to Dyadic Semantics (another construction, defined in [1], which shares the "same spirit" as our generalization). Finally, the last section is based on Twist-Structures Semantics again, comparing it with Discriminant Structures Semantics. So, in this way, the present article is placed in context. We warn the reader that Sections 3 and 4 are strongly related and, in a certain sense, similar. But, while the former deals with D.S.S. by means of an example, the latter applies the previously sketched ideas to give a formal treatment of our construction. We have chosen to take the risk of being redundant, for the benefit of a more intuitive approach to D.S.S.
With respect to the basic notation and definitions that will be used in this paper, please take into account that we are studying different ways of defining one and the same consequence relation. So, we chose to use the traditional formalism of Abstract Logic, applied mainly to the particular case of consequence relations defined by matrices. For that, we rely mainly on [2], with some small notational changes when necessary.
Definition 1.1. We denote by ω = {0, 1, 2, ...} the set of natural numbers; let V be a countable set, called the set of atomic formulas, fixed from now on. The elements of V are denoted by p, q, r . . ., with subscripts if necessary. With this in mind, we define a signature as a pair (C, ρ), where C is a set of symbols (called connectives), and ρ : C → ω. For every c ∈ C, ρ(c) is the arity of c. Often we denote a signature just by C, if there is no risk of confusion. For every signature C, the propositional language generated by C (denoted by L(C)) is the absolutely free algebra generated by C over V. Besides that, a C-matrix is a pair M = (A, D), where A is an algebra similar to L(C), and D ⊆ A is the set of designated values of M. For every connective c, its associated operation in A will be called the truth-function in A associated to c. In general terms, every operation f : A^n → A will be called an A-truth-function¹. Note that, for simplicity, we will identify an algebra with its universe. Besides, we identify the set of truth-functions in A associated to the connectives of C with C itself. This notational abuse will be sustained all along the paper. Every C-matrix defines a consequence relation in L(C), as usual:
Definition 1.2. Let M = (A, D) be a C-matrix. An M-valuation is a homomorphism v : L(C) → A.
The consequence relation induced by M is |=M, defined in the following way: Γ |=M α iff, for each valuation v : L(C) → A, if v(Γ) ⊆ D, then v(α) ∈ D. We say that α is a tautology (relative to M) iff ∅ |=M α (denoting this by |=M α). The logic induced by M is the pair L = (C, |=M). If the domain of a C-matrix M is finite, we will say that M is an n-valued matrix and, by extension, that L = (C, |=M) is an n-valued logic. This definition can be generalized to classes: if K is a class of C-matrices, the consequence relation |=K is given by: Γ |=K α iff Γ |=M α for every M in K.
Definition 1.3. Given two C-matrices M1 = (A1, D1) and M2 = (A2, D2), we say that h : A1 → A2 is a matrix homomorphism from M1 to M2² iff it verifies: (1) h is a homomorphism (in the algebraic sense) from A1 to A2. (2) h(D1) ⊆ D2. Two matrices M1 and M2 are isomorphic iff there exists a matrix homomorphism h
¹ The truth-functions do not depend on M, but on A.
² It is convenient to distinguish between valuations (homomorphisms from L(C) to the support of the C-matrices), cf. Definition 1.2, and matrix homomorphisms.
verifying additionally: (a) h is a bijective function. (b) h is a strict homomorphism, that is: h(A1 − D1) ⊆ A2 − D2. The symbol ≅ denotes isomorphism (between matrices or algebras, depending on the context). Please note that, if M1 and M2 are isomorphic C-matrices, then |=M1 = |=M2. Matrices are one of the simplest ways to furnish consequence relations and, therefore, abstract logics, whose formal definition is the following:
Definition 1.4. An abstract logic is a pair (CL, ⊢L), where CL is a signature and ⊢L ⊆ ℘(L(CL)) × L(CL), satisfying, for every Γ ∪ {α} ⊆ L(CL), extensiveness, monotonicity and transitivity (we omit these well-known definitions). Given a logic L = (CL, ⊢L), if there is a CL-matrix M such that ⊢L = |=M, we will say that L is a matrix logic, and we will simply use |=M instead of ⊢L. Please recall that a matrix logic is always structural (i.e. closed under substitutions). In addition, if M is finite, then |=M is finitary (we omit this definition, too).
We conclude this section with some comments about notation: I.H. means "Induction Hypothesis", as usual. The symbol πi denotes the i-th projection of a t-uple. For a formula α = α(x1, . . . , xn), the expression α(x1/β1, . . . , xn/βn) denotes the uniform substitution, in α, of the variables xi by the formulas βi. If there is no risk of confusion, the expression α(x1/β1, . . . , xn/βn) will be denoted simply by α(β1, . . . , βn). With respect to notations related to algebras: if B is any algebra, the symbol ~a denotes t-uples: ~a := (a1, . . . , at) ∈ B^t. Besides that, we will use the "classical" two-element boolean algebra widely throughout this paper. This algebra, as usual, will be denoted by 2 = {0, 1}. The 2-truth-functions will be called boolean ones. Finally, the formal equational language that "will talk about algebras"³ is built on the basis of L(C) (considering its elements as terms, in this context).
This implies that the connectives of C are considered as function symbols in this case. Finally, the symbol ≈ will denote the equality predicate of equational languages.
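As an illustration of Definition 1.2 (a sketch of ours, not part of the original development), the consequence relation of a finite matrix can be checked by brute force over all valuations of the atoms occurring in Γ ∪ {α}; the function name `models` and the tuple encoding of formulas are ad-hoc conventions:

```python
from itertools import product

def models(A, D, ops, Gamma, alpha, atoms):
    """Brute-force check of Gamma |=_M alpha for a finite C-matrix M = (A, D).
    A formula is an atom name (a string) or a tuple (connective, sub_1, ...)."""
    def ev(phi, v):
        if isinstance(phi, str):
            return v[phi]                      # atomic formula
        c, *args = phi
        return ops[c](*(ev(a, v) for a in args))
    # quantify over all valuations v : atoms -> A (enough, by structurality)
    for vals in product(A, repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(ev(g, v) in D for g in Gamma) and ev(alpha, v) not in D:
            return False
    return True

# The two-element boolean matrix (2, {1}) with material implication:
imp = lambda x, y: max(1 - x, y)
assert models([0, 1], {1}, {'imp': imp}, [('imp', 'p', 'q'), 'p'], 'q', ['p', 'q'])
assert not models([0, 1], {1}, {'imp': imp}, ['p'], 'q', ['p', 'q'])
```

The same checker works verbatim for the finite matrices discussed in the next sections, since only the carrier, the designated set and the operation tables change.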
2 An example of Twist-Structures Semantics for n-valued logics
In order to understand the behavior of Twist-Structures Semantics in the context of n-valued logics, consider the following example (taken from [7]), applied to the weakly-intuitionistic logic I1P0, defined in [9]. This logic has as signature the set C[1,0] = {¬, ⊃} (with ρ(¬) = 1, ρ(⊃) = 2), and its definition is the following:
³ In particular, we will use equational languages to express some boolean equations in a formal way. Hence, the underlying set of function symbols will be {∧, ∨, −, 0, 1}, with obvious arities.
Example 2.1. The logic I1P0 = (C[1,0], |=M[1,0]) is defined by means of the C[1,0]-matrix M[1,0] = ({F0, F1, T0}, {T0}), wherein the truth-functions of ⊃ and ¬ are indicated in the tables below:

⊃  | F0  F1  T0
F0 | T0  T0  T0
F1 | T0  T0  T0
T0 | F0  F0  T0

¬
F0 | T0
F1 | F0
T0 | F0
Just for a better understanding of M[1,0]: the truth-values T0 and F0 are classical truth and falsehood, respectively; on the other hand, F1 is an "intermediate value of falsehood". Considering the definition of I1P0, it is possible to characterize |=M[1,0] using twist-structures. Its definition (taken from [7]) is given in the sequel:
Definition 2.2. For every boolean algebra (B, ∨B, ∧B, −B, 1B, 0B)⁴, the twist-structure associated to B is the algebra R[1,0](B) = (R[1,0](B), ¬, ⊃), where:
(1) Its support is R[1,0](B) = {(x0, x1) ∈ B × B∗ : x0 ∧B x1 = 0B}, where B∗ is the dual algebra (in its order) of B.
(2) ¬(x0, x1) = (x1, −B x1).
(3) (x0, x1) ⊃ (y0, y1) = (x0 →B y0, −B(x0 →B y0)).
The class of all the twist-structures of type (1, 0) will be denoted by T[1,0]. It is easy to prove that the operations indicated above make sense. On the other hand, the operations in B and B∗ can be mutually defined. For instance, if (x0, x1), (y0, y1) ∈ R[1,0](B), then ((x0, x1) ⊃ (y0, y1))0 := π0((x0, x1) ⊃ (y0, y1)) = x0 →B y0. Also, ((x0, x1) ⊃ (y0, y1))1 := π1((x0, x1) ⊃ (y0, y1)) = −B(x0 →B y0) = x0 ∧B −B y0 = x0 ∨B∗ −B∗ y0 = y0 →B∗ x0. For the sake of simplicity we choose to express all the operations in R[1,0](B) in terms of B. So, B∗ just remarks the behavior of the negations of the formulas, suggesting that the order on the elements of B∗ is the inverse of the order on B.
Definition 2.3. Let R[1,0](B) be the twist-structure of type (1, 0) associated to B. An R[1,0](B)-valuation is any homomorphism w : L(C[1,0]) → R[1,0](B). Now, the consequence relation |=T[1,0] defined in L(C[1,0]) is given by: Γ |=T[1,0] α iff, for every twist-structure R[1,0](B) of T[1,0] and for every R[1,0](B)-valuation w, the following holds: if w(Γ) ⊆ {(1B, 0B)} then w(α) = (1B, 0B). In addition, we say that α is a tautology (with respect to T[1,0]) iff ∅ |=T[1,0] α (denoted, as usual, by |=T[1,0] α).
⁴ From now on, the subscript of the operations of B will be dropped if there is no risk of confusion.
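Definition 2.2 can be sanity-checked over the two-element algebra 2: encoding F0, F1, T0 as the pairs (0, 1), (0, 0), (1, 0), the twist operations reproduce the tables of Example 2.1. This is an illustration of ours, not part of [7]; the names `NEG` and `IMP` are ad hoc:

```python
# Definition 2.2 over the two-element boolean algebra; pairs encode
# F0 ~ (0, 1), F1 ~ (0, 0), T0 ~ (1, 0).
imp2 = lambda a, b: 1 if a <= b else 0     # boolean implication in 2

def NEG(p):                                 # clause (2): (x1, -x1)
    x0, x1 = p
    return (x1, 1 - x1)

def IMP(p, q):                              # clause (3): (x0 -> y0, -(x0 -> y0))
    z = imp2(p[0], q[0])
    return (z, 1 - z)

F0, F1, T0 = (0, 1), (0, 0), (1, 0)
assert [NEG(x) for x in (F0, F1, T0)] == [T0, F0, F0]        # the ¬-table
assert [IMP(T0, y) for y in (F0, F1, T0)] == [F0, F0, T0]    # the T0-row of ⊃
assert all(IMP(x, y) == T0 for x in (F0, F1) for y in (F0, F1, T0))
```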
Remark 2.4. Every twist-structure R[1,0](B) in T[1,0] is, in fact, a C[1,0]-matrix whose designated set is {(1B, 0B)}. So, the relation |=T[1,0] can be understood as the consequence relation defined by a class of C[1,0]-matrices (recall Definition 1.2), which is T[1,0].
Two essential facts about Twist-Structures Semantics for I1P0 are the following:
• In T[1,0] there exists a canonical twist-structure M′, which is isomorphic to M[1,0].
• Moreover, if ⊭T[1,0] α, then ⊭M′ α.
These facts (which will be analyzed in more detail later) imply this result:
Theorem 2.5. |=M[1,0] α iff |=T[1,0] α, for every α ∈ L(C[1,0]).
What are the conditions that allowed us to obtain a Twist-Structures Semantics for I1P0? At first sight, we note that the connective ¬ was widely used in the definition of the elements of every R[1,0](B). So, a natural question is whether this kind of construction can be applied to any abstract logic, with or without negation. To answer this, note that the truth-values of I1P0 can be separated (or discriminated) by means of successive iterations of ¬ (this is done in a somewhat hidden way). We will discuss this fact in the next section.
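The discrimination by iterations of ¬ mentioned above can be made explicit in a few lines (our own illustration; the dictionary `NEG` transcribes the ¬-table of Example 2.1): the map x ↦ (h(x), h(¬x)), where h is the characteristic function of the designated set, is already injective on the three truth-values of I1P0.

```python
# Truth-values and the negation table of M[1,0] (Example 2.1).
F0, F1, T0 = "F0", "F1", "T0"
NEG = {F0: T0, F1: F0, T0: F0}
h = lambda x: 1 if x == T0 else 0        # characteristic function of {T0}
h1 = {x: (h(x), h(NEG[x])) for x in (F0, F1, T0)}
assert h1 == {F0: (0, 1), F1: (0, 0), T0: (1, 0)}
assert len(set(h1.values())) == 3        # one iteration of ¬ discriminates
assert all(a * b == 0 for (a, b) in h1.values())   # pairs lie in R[1,0](2)
```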
3 A generalization: Semantics of Discriminant Structures
To motivate the definition that will allow us to deal with the generalization suggested in the previous section, consider the following example:
Example 3.1. Let L? be the {⊃, ¬}-fragment of the well-known Łukasiewicz three-valued logic L3, but taking {1/2} as the set of designated values. Formally speaking, L? = (CL?, |=M?), where CL? = {⊃, ¬} (with obvious arities) and |=M? is defined by M? = (A?, D?), with A? = {0, 1/2, 1} and D? = {1/2}. In M?, the truth-functions associated to ⊃ and ¬ are:

⊃   | 0    1/2  1
0   | 1    1    1
1/2 | 1/2  1    1
1   | 0    1/2  1

¬
0   | 1
1/2 | 1/2
1   | 0
Of course, even when the truth-values (and, moreover, the operations) are the same as in the matrix that defines L3, the change in the set of designated values produces different tautologies in both logics. In fact, we will prove later that L? has no tautologies at all. Also, an adequate Twist-Structures Semantics for L? will be constructed later. For that, we use some basic and well-known facts about Algebraic Logic (see [8]). Besides that, recall that we consider a twist-structure for a logic L = (CL, ⊢L) simply as a particular CL-matrix. We apply this idea in the following definition.
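One quick way to support the claim that L? has no tautologies (a sketch of ours, not the proof announced in the text): the subset {0, 1} of A? is closed under both truth-functions, so any valuation sending every atomic formula into {0, 1} never reaches the only designated value 1/2.

```python
from fractions import Fraction

H = Fraction(1, 2)                        # the truth-value 1/2
imp = lambda x, y: min(1, 1 - x + y)      # Lukasiewicz implication
neg = lambda x: 1 - x

# spot-check the tables of Example 3.1:
assert imp(H, 0) == H and imp(1, H) == H and neg(H) == H
# {0, 1} is closed under both operations, hence no formula can take the
# designated value 1/2 under a valuation of the atoms into {0, 1}:
assert all(imp(x, y) in (0, 1) for x in (0, 1) for y in (0, 1))
assert all(neg(x) in (0, 1) for x in (0, 1))
```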
Definition 3.2. Given a boolean algebra (B, ∨, ∧, −, 1, 0), the Discriminant Structure (associated to B) is the following CL?-matrix: R(B) = ((R(B), ¬, ⊃), {(1, 0)}), where:
(1) R(B) = {(x0, x1) ∈ B^2 : x0 ∧ x1 = 0}.
(2) (x0, x1) ⊃ (y0, y1) := (−x1 ∧ [(x0 ↔ −y0) ∧ (x0 ↔ y1)], −(x0 ∨ x1 ∨ y0) ∧ y1).
(3) ¬(x0, x1) := (x0, −(x0 ∨ x1)).
The class of all the discriminant structures for L? is denoted by D?. In addition, we define the consequence relation given by the class of discriminant structures D? as the matrix consequence relation |=D? ⊆ ℘(L(CL?)) × L(CL?). That is, Γ |=D? α iff, for every discriminant structure R(B), it holds that, for every R(B)-valuation⁵ w : L(CL?) → R(B), w(Γ) ⊆ {(1, 0)} implies w(α) = (1, 0).
About the previous definition it can be easily proved that:
Proposition 3.3. The operations ⊃ and ¬ are well defined. That is, every set R(B) is closed under applications of ⊃ and ¬.
Definition 3.4. The canonical discriminant structure for L? is R(2). According to Definition 3.2, in this structure the functions ⊃ and ¬ behave as depicted in the following tables:

⊃      | (0, 1)  (1, 0)  (0, 0)
(0, 1) | (0, 0)  (0, 0)  (0, 0)
(1, 0) | (1, 0)  (0, 0)  (0, 0)
(0, 0) | (0, 1)  (1, 0)  (0, 0)

¬
(0, 1) | (0, 0)
(1, 0) | (1, 0)
(0, 0) | (0, 1)
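The boolean expressions of Definition 3.2, evaluated in 2, can be checked mechanically against these canonical tables (an illustration of ours; `imp` and `neg` below transcribe clauses (2) and (3)):

```python
# Definition 3.2's operations over the two-element boolean algebra;
# the pairs encode (0,1) ~ 0, (1,0) ~ 1/2, (0,0) ~ 1 in M?.
def imp(p, q):
    (x0, x1), (y0, y1) = p, q
    eq = lambda a, b: 1 if a == b else 0          # boolean <->
    z0 = (1 - x1) & eq(x0, 1 - y0) & eq(x0, y1)
    z1 = (1 - (x0 | x1 | y0)) & y1
    return (z0, z1)

def neg(p):
    x0, x1 = p
    return (x0, 1 - (x0 | x1))

R2 = [(0, 1), (1, 0), (0, 0)]
assert [neg(p) for p in R2] == [(0, 0), (1, 0), (0, 1)]   # the ¬-table
assert imp((0, 0), (0, 1)) == (0, 1)                      # 1 ⊃ 0 = 0
assert imp((1, 0), (0, 1)) == (1, 0)                      # 1/2 ⊃ 0 = 1/2
assert all(imp(p, q) in R2 for p in R2 for q in R2)       # closure (Prop. 3.3)
```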
It should be clear that R(2) is a matrix isomorphic to M?, cf. Definition 1.3. Besides that, it is easy to prove:
Proposition 3.5. The discriminant structures of L? verify:
(a) If B1 and B2 are isomorphic boolean algebras, then R(B1) and R(B2) are isomorphic matrices.
(b) Considering every R(B)-valuation w : L(CL?) → R(B) as a pair of (non-homomorphical) functions w = (w0, w1) (where wi := πi ◦ w), then, for every valuation w : L(CL?) → R(B), w(α) = (1, 0) iff w0(α) = 1.
On the other hand, the basic tools taken from Algebraic Logic that will be used in the sequel are related to prime filters. We summarize the needed results:
Proposition 3.6. Let B be any boolean algebra. Then:
(a) For every a ≠ 1 there exists a prime filter ∇ such that a ∉ ∇.
(b) For every prime filter ∇ in B, the binary relation ≡∇ defined by: x ≡∇ y iff {x → y, y → x} ⊆ ∇ is a congruence, and its quotient B/∇ = {∇, ∆} is isomorphic to the boolean algebra 2 (where ∆ := B − ∇ is the prime ideal induced by ∇).
⁵ That is, for every homomorphism w from L(CL?) to R(B).
From Propositions 3.5(a) and 3.6(b) we have:
Corollary 3.7. For every boolean algebra B and every prime filter ∇ ⊆ B, R(B/∇) ≅ R(2), where the operations in R(B/∇) are given as in Definition 3.4 (replacing the element 1 by ∇ and 0 by ∆, of course).
In the sequel we will prove that |=D? α iff |=M? α. For that, we will relate the canonical structure to prime filters of boolean algebras. First of all:
Proposition 3.8. [Trichotomy]: Let B be any boolean algebra, and let ∇ be any prime filter of B. Then, for every pair (x0, x1) ∈ R(B), exactly one of the following conditions holds:
- x0 ∈ ∇ and x1 ∈ ∆.
- x0 ∈ ∆ and x1 ∈ ∆.
- x0 ∈ ∆ and x1 ∈ ∇.
Proof: Straightforward, from the definitions of R(B) and prime filters. □
Proposition 3.9. Every prime filter ∇ of a boolean algebra B induces a matrix epimorphism e∇ : R(B) → R(B/∇) such that e∇(x0, x1) = (∇, ∆) iff x0 ∈ ∇.
Proof: Define (for every (x0, x1) ∈ R(B)) e∇(x0, x1) := (x̄0, x̄1) (where x̄i is the equivalence class of xi in B/∇). By Proposition 3.8, e∇ is a well-defined surjective function. Besides, x0 ∈ ∇ iff x̄0 = ∇ iff (x̄0, x̄1) = (∇, ∆) (again, by Proposition 3.8), iff e∇(x0, x1) = (∇, ∆). This also implies that e∇(1, 0) = (∇, ∆), verifying thus the preservation of designated values. So, we just need to prove that e∇ is a homomorphism. That is:
(A): e∇(¬(x0, x1)) = ¬(e∇(x0, x1)). Consider these cases (by Proposition 3.8):
Case A.1: x0 ∈ ∇, x1 ∈ ∆. So, e∇(x0, x1) = (∇, ∆), and then ¬(e∇(x0, x1)) = (∇, ∆), too (by Corollary 3.7). Besides that, e∇(¬(x0, x1)) = e∇(x0, −(x0 ∨ x1)). Realizing that x0 ∈ ∇ implies (x0 ∨ x1) ∈ ∇ (which implies −(x0 ∨ x1) ∈ ∆), we have that e∇(¬(x0, x1)) = (∇, ∆), too.
Case A.2: x0 ∈ ∆, x1 ∈ ∆. Case A.3: x0 ∈ ∆, x1 ∈ ∇. Proceed as in Case A.1.
(B): e∇((x0, x1) ⊃ (y0, y1)) = e∇(x0, x1) ⊃ e∇(y0, y1). In our (combinatorial) proof we should consider nine possibilities. However, by the definition of the connective ⊃ in R(B/∇) (see Corollary 3.7), we can consider just six cases:
Case B.1: x0 ∈ ∆, x1 ∈ ∇.
Case B.2: y0 ∈ ∆, y1 ∈ ∆.
Case B.3: x0 ∈ ∇, x1 ∈ ∆; y0 ∈ ∆, y1 ∈ ∇.
Case B.4: x0 ∈ ∇, x1 ∈ ∆; y0 ∈ ∇, y1 ∈ ∆.
Case B.5: x0 ∈ ∆, x1 ∈ ∆; y0 ∈ ∆, y1 ∈ ∇.
Case B.6: x0 ∈ ∆, x1 ∈ ∆; y0 ∈ ∇, y1 ∈ ∆.
For the analysis of all these possibilities, as in (A), we repeatedly use Corollary 3.7, as well as the definition of the operation ⊃ in every R(B). Also, we use basic properties of prime filters and ideals without explicit mention. We will give the demonstrations of cases B.1 and B.6 as examples (the other ones are similar):
Case B.1: x0 ∈ ∆, x1 ∈ ∇. So, e∇(x0, x1) ⊃ e∇(y0, y1) = (∆, ∆) (the behaviors of y0 and y1 are not important here).
From this, we have −x1 ∈ ∆ and then
−x1 ∧ [(x0 ↔ −y0) ∧ (x0 ↔ y1)] ∈ ∆ (∗). On the other hand, x0 ∨ x1 ∨ y0 ∈ ∇, and so −(x0 ∨ x1 ∨ y0) ∈ ∆. Hence, −(x0 ∨ x1 ∨ y0) ∧ y1 ∈ ∆ (∗∗). From (∗) and (∗∗), e∇((x0, x1) ⊃ (y0, y1)) = (−x1 ∧ [(x0 ↔ −y0) ∧ (x0 ↔ y1)], −(x0 ∨ x1 ∨ y0) ∧ y1)‾ = (∆, ∆).
Case B.6: x0 ∈ ∆, x1 ∈ ∆; y0 ∈ ∇, y1 ∈ ∆. Then, e∇(x0, x1) ⊃ e∇(y0, y1) = (∇, ∆). Besides that, note that −y0 → x0 = y0 ∨ x0 ∈ ∇, and x0 → −y0 = −x0 ∨ −y0 ∈ ∇, too. So, x0 ↔ −y0 ∈ ∇. In a similar way, y1 → x0 ∈ ∇, x0 → y1 ∈ ∇, and then x0 ↔ y1 ∈ ∇. So, −x1 ∧ [(x0 ↔ −y0) ∧ (x0 ↔ y1)] ∈ ∇. Finally, x1 ∨ x0 ∨ y0 ∈ ∇ (since y0 ∈ ∇), which implies −(x0 ∨ x1 ∨ y0) ∈ ∆. Thus, −(x0 ∨ x1 ∨ y0) ∧ y1 ∈ ∆ and, therefore, e∇((x0, x1) ⊃ (y0, y1)) = (∇, ∆). As was said, the rest of the cases are proved in a similar way. And, since e∇ is a homomorphism, the proof of the theorem is completed. □
Theorem 3.10. For every α ∈ L(CL?), |=M? α iff |=D? α.
Proof: On one hand, since M? is isomorphic to R(2), we get that |=D? α implies |=M? α. For the converse, suppose that there exist a boolean algebra B and a homomorphism w : L(CL?) → R(B) such that w(α) ≠ (1, 0). Then, w0(α) ≠ 1B (recall Proposition 3.5(b)). Hence, there is a prime filter ∇ in B with w0(α) ∉ ∇, by Proposition 3.6(a). From this and Proposition 3.9, there is a matrix epimorphism e∇ : R(B) → R(B/∇), with e∇(w(α)) ≠ (∇, ∆) (because w0(α) ∉ ∇). So, e∇ ◦ w : L(CL?) → R(B/∇) is an R(B/∇)-valuation that verifies (e∇ ◦ w)(α) ≠ (∇, ∆), the designated value of R(B/∇). Since this matrix is isomorphic to R(2) (and therefore isomorphic to M?), we have ⊭M? α. This concludes the proof. □
Based on the construction for L? given in the previous example (and motivated by Example 2.1, too), let us try to explain, in informal terms, the process that would allow us to find a Discriminant Structures Semantics for a given matrix logic. For that, consider as a starting point a signature C, and a C-matrix M = (A, D).
In addition, take into account the function h : A → 2 defined by: h(x) = 1 iff x ∈ D (that is, h = χD, relative to A). Also, we will abbreviate the composition of truth-functions f ◦ · · · ◦ f (k times) by f^k. Of course, if k = 0, then f^k = id. With these conventions, the basis of our previous construction is the existence of a discriminant pair for M, which is defined in the sequel.
Definition 3.11. Given a C-matrix M = (A, D), a discriminant pair for M is a pair (β, ~a), where β = β(p, q1, . . . , qm) ∈ L(C), ~a = (a1, . . . , am) ∈ A^m, and the A-truth-function f(β,~a)(x) := β(x, ~a) is discriminant (by iterations). That is, there is k ∈ ω such that the function hk : A → 2^{k+1} is injective, where hk(x) := [h(x), h(f(β,~a)(x)), h(f(β,~a)^2(x)), . . . , h(f(β,~a)^k(x))].
With the previous definition in mind, the sketch of a construction of a suitable discriminant structure for an arbitrary C-matrix M = (A, D) is given below:
1) Find, for M, a discriminant pair (β, ~a).
2) Of course, if such a pair exists, we can define a matrix M′ ≅ M, where M′ = (hk(A), hk(D)), and the truth-functions of M′ are defined "copying" the
behavior of the truth-functions of M. M′ will be considered the canonical discriminant structure.
3) Characterize hk(A) (considering that it is a subset of 2^{k+1}) by means of the lattice-theoretical behavior of its components.
4) Express the truth-functions of M′, associated to every connective c ∈ C, again considering the lattice-theoretical properties used in the truth-function c, in the matrix M′.
5) Define a class of subsets of algebras that generalize hk(A). This class is formed by C-algebras R(B), with domain R(B) ⊆ B^{k+1} (where the algebras B range over a certain fixed class). In every algebra R(B), the operations c ∈ C are defined according to the considerations of 4). On the other hand, for every R(B), consider a special subset D(B), which is, of course, the generalization of hk(D). Every pair (R(B), D(B)) will be called a discriminant structure and is, actually, a C-matrix. In addition, define the class DM, constituted by all the discriminant structures induced by M.
6) Finally, define the relation |=DM ⊆ ℘(L(C)) × L(C) as usual: Γ |=DM α iff, for every discriminant structure (R(B), D(B)) and every homomorphism w : L(C) → R(B), w(Γ) ⊆ D(B) implies w(α) ∈ D(B).
Here we note that the sets D(B) (of designated values) are often characterized by equations⁶. This parallels the relation between standard matrix semantics and algebraic semantics, where the set of designated values of a matrix M can (sometimes) be characterized by equations. The construction sketched above was applied in Example 2.1, as well as in Example 3.1. In the latter case, note that the discriminant pair (β, ~a) is given by:

β = q1 ⊃ p,  ~a = (a1) = (1/2)   (∗)
Hence, f(β,~a)(x) = 1/2 ⊃ x and, so, f(β,~a)(0) = 1/2, f(β,~a)(1/2) = 1, and f(β,~a)(1) = 1. Considering that D? = {1/2}, we can discriminate the truth-values using just one iteration. From this, h1(A?) = {h1(0), h1(1/2), h1(1)} = {(0, 1), (1, 0), (0, 0)} (this is the reason for Definition 3.4). Once we have obtained a discriminant pair for L?, we need to characterize h1(A?) (in algebraic terms). Here note that h1(A?) = {(x, y) ∈ 2^2 : x ∧ y = 0} (∗∗). Considering that h1(D?) = {(1, 0)}, we must define operations ⊃ and ¬ in h1(A?) (in such a way that M′ := (h1(A?), h1(D?)) is isomorphic to M?). For that, recall that, cf. Definition 3.2,
(x0, x1) ⊃ (y0, y1) := (−x1 ∧ [(x0 ↔ −y0) ∧ (x0 ↔ y1)], −(x0 ∨ x1 ∨ y0) ∧ y1)
¬(x0, x1) := (x0, −(x0 ∨ x1)).
⁶ For instance, in Example 2.1, given R[1,0](A), (x0, x1) ∈ D[1,0](A) iff x0 = 1A. The same characterization can be used in Example 3.1.
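The computation of f(β,~a) and h1 just performed generalizes directly to Definition 3.11. The following checker is a sketch of ours (the name `discriminates` is ad hoc): it tests whether iterating a given unary truth-function discriminates a finite matrix in k steps, and it recovers the values h1(0) = (0, 1), h1(1/2) = (1, 0), h1(1) = (0, 0) for L?:

```python
from fractions import Fraction

def discriminates(A, D, f, k):
    """Return (injective?, table of hk), where
    hk(x) = [h(x), h(f(x)), ..., h(f^k(x))] as in Definition 3.11."""
    h = lambda x: 1 if x in D else 0
    def hk(x):
        out = []
        for _ in range(k + 1):
            out.append(h(x))
            x = f(x)                      # next iteration of f_(beta,a)
        return tuple(out)
    table = {x: hk(x) for x in A}
    return len(set(table.values())) == len(A), table

H = Fraction(1, 2)
imp = lambda x, y: min(1, 1 - x + y)      # Lukasiewicz implication
ok, h1 = discriminates([0, H, 1], {H}, lambda x: imp(H, x), 1)
assert ok
assert h1 == {0: (0, 1), H: (1, 0), 1: (0, 0)}
```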
What is the "hidden" reason for the definition of ⊃ and ¬ in h1(A?)? To clarify this, denote (x0, x1) ⊃ (y0, y1) := (z0, z1) (and, similarly, ¬(x0, x1) := (r0, r1)). We can see that, for instance, ⊃ is simply defined componentwise, considering, in each component z0 and z1, convenient boolean expressions⁷ that relate the values of all the components involved in the domains with z0 and with z1. Please check this idea, considering that, for ⊃, we get two functions f⊃0, f⊃1 : 2^4 → 2 (since in the domain we have truth-values for x0, x1, y0 and y1, and in the images for z0 and z1, resp.). In the case of ¬, each component is obtained by truth-functions from 2^2 to 2. The definition of the (class of) discriminant structures for M? can be done now. For that, note that the lattice-theoretical characterization of h1(A?) can be adapted to every boolean algebra. Also, the boolean expressions used for the definition of the 2-truth-functions can be applied to any boolean algebra, too. This fact allowed us, in Definition 3.2, to obtain the class D? of discriminant structures, as we said. As we have seen in Theorem 3.10, |=D? α iff |=M? α. But we do not know (so far) whether, in general terms, we can define a relation |=DM in such a way that the mentioned theorem is, in fact, a particular case of a general result. This question has an affirmative answer, as we will see later.
4 Some technical results
Summarizing the previous discussion, given a C-matrix M = (A, D), a D.S.S. associated to it would be obtained paying attention to:
(i) The discriminant pair (β, ~a).
(ii) The lattice-theoretical characterization of the sets hk(A) and hk(D).
(iii) An adequate definition of the truth-functions (in hk(A)) corresponding to the connectives of C.
The fundamental technical result about discriminant structures, to be proved in the sequel, establishes that the existence of a discriminant pair (β, ~a) is sufficient to guarantee the Discriminant Structures Semantics as a whole. For its demonstration, we need some technical results, starting with some simple facts about boolean algebras:
Proposition 4.1. Every subset S of a finite boolean algebra B can be characterized (relative to B) by an equation, defined in the language of boolean algebras (that is, ∨, ∧, −, 0 and 1).
Proof: Since B is finite, it is isomorphic to 2^r for a certain r ∈ ω. So, we can consider the elements of B as r-tuples formed by 0 and/or 1. On the other hand, every S ⊆ B can be identified with a set S ⊆ 2^r. Consider now
⁷ That is, an expression built in the boolean language, using the symbols ∧, ∨, −, 1, 0 and ≈. In our example (which motivates the characterization of Definition 3.2), we started by considering a normal form, and then we simplified it.
the characteristic function χS : 2^r → 2. Since χS is a boolean truth-function, it admits a conjunctive normal form (say, α1 ∧ · · · ∧ αp, with 1 ≤ p ≤ 2^r). The equation that characterizes S is, actually, α1 ∧ · · · ∧ αp ≈ 1. □
Proposition 4.2. Let M = (A, D) be a C-matrix which admits a discriminant pair (β, ~a) by means of k iterations, and let hk : A → 2^{k+1} be the function obtained from (β, ~a). Then there is a matrix M′ isomorphic to M, whose domain is hk(A).
Proof: Define M′ := (A′, D′), where A′ := hk(A) and D′ := hk(D), as was suggested in Section 3. Since hk is injective, hk(A) is equipotent with A (and hk(D) is equipotent with D). Now, for every n-ary operation c, define the operation c′ : (A′)^n → A′ by: c′(x1, . . . , xn) := hk(c(hk⁻¹(x1), . . . , hk⁻¹(xn))). This definition makes sense because hk is injective. From this, it can be easily proved that M′ ≅ M. □
Remark 4.3. Note here the following fact about M′: if c′ : (A′)^n → A′ is the truth-function associated to c ∈ C, it can be defined componentwise. This entails that, for every 0 ≤ i ≤ k, there exists a truth-function fi : 2^{n(k+1)} → 2 defined by fi(~x1, . . . , ~xn) := πi(c′(~x1, . . . , ~xn)) (where ~x1, . . . , ~xn belong to 2^{k+1}). Every function fi has an equivalent conjunctive normal form (c.n.f.) f′i : 2^{n(k+1)} → 2. So, the operations in A′ can now be defined in this alternative way: for every n-ary connective c ∈ C, its corresponding operation cA′ : (A′)^n → A′ is given by: cA′(~x1, . . . , ~xn) = (f′i(~x1, . . . , ~xn))0≤i≤k, where the f′i are the c.n.f.s found previously⁸. Formally, if c is an n-ary connective of C, and {~x1, . . . , ~xn} ⊆ A′, then cA′(~x1, . . . , ~xn) := (f′0(~x1, . . . , ~xn), . . . , f′k(~x1, . . . , ~xn)).
The previous proposition suggests the following:
Definition 4.4. Consider M = (A, D) admitting a discriminant pair, and the equations eqA, eqD, which characterize hk(A) (resp. hk(D)) as subsets of 2^{k+1}, cf.
Proposition 4.1. For every boolean algebra B, the discriminant structure associated to B is the C-matrix (R(B), D(B)), with: R(B) := {~x ∈ B^{k+1} : ~x satisfies eqA}, D(B) := {~x ∈ B^{k+1} : ~x satisfies eqD}⁹. On the other hand, for any n-ary connective c and every {~x1, . . . , ~xn} ⊆ R(B), cR(B)(~x1, . . . , ~xn) := (f′0(~x1, . . . , ~xn), . . . , f′k(~x1, . . . , ~xn)), where f′0, . . . , f′k are the c.n.f.s found in Remark 4.3, but applied to R(B) in each case.
The following results are valid in D.S.S. First of all, the structures are well defined:
Proposition 4.5. For every boolean algebra B, the set R(B) is closed under the operations cR(B). That is, R(B) is well defined as an algebra.
⁸ The existence of a conjunctive normal form is not essential in itself. The relevant point is that the functions fi can be expressed in a boolean language. We could also use disjunctive normal forms, for example.
⁹ If ~x satisfies eqD, then ~x satisfies eqA, too. Hence, D(B) ⊆ R(B).
Proof: Our claim is just a reformulation of the following fact: let S ⊆ 2^{k+1} be characterized by the equation χ_S ≈ 1, and let n be in ω. Consider the truth-functions f_i : 2^{(k+1)n} → 2 (expressed in boolean terms by f_i′), and define the function f : 2^{(k+1)n} → 2^{k+1} by f(~x_1, ..., ~x_n) := (f_i′(~x_1, ..., ~x_n))_{0≤i≤k}. Define, for every boolean algebra B, the set R(B) := {~x ∈ B^{k+1} : χ_S(~x) ≈ 1}, and the function f(~x_1, ..., ~x_n) as before, but interpreting the boolean connectives involved in B. Then, if ~x_1, ..., ~x_n ∈ S implies f(~x_1, ..., ~x_n) ∈ S, it is also valid that ~x_1, ..., ~x_n ∈ R(B) implies f(~x_1, ..., ~x_n) ∈ R(B). And this result holds because our sets and functions are defined by means of equations, and every equation valid in 2 is valid in B, too. □

In addition, the sets D(B) can be equationally characterized in a simpler way, in the context of R(B):

Proposition 4.6. For every ~x = (x_0, ..., x_k) in R(B), ~x ∈ D(B) iff x_0 = 1_B.

Proof: Since M ≅ M′ = (R(2), D(2)), we have that, for every ~x = (x_0, ..., x_k) in R(2), ~x satisfies eq_D iff there is a ∈ D such that h_k(a) = ~x. But a ∈ D implies π_0(h_k(a)) = 1. This proves that, for every ~x ∈ R(2), the following are equivalent facts:
- ~x satisfies eq_D.
- ~x satisfies the equation eq_{R(2)} ≈ 1 and x_0 = 1.
- ~x satisfies eq_{R(2)} ∧ p_0 ≈ 1.
So, we have proved our claim for R(2), and it can be extrapolated to every R(B), since the required properties are characterized by boolean equations. □

Besides, it is always possible to “recover” the canonical discriminant structure by means of prime filters. For that, note the following obvious fact: if B_1 and B_2 are isomorphic boolean algebras, then (R(B_1), D(B_1)) and (R(B_2), D(B_2)) are isomorphic C-matrices.
From this, and noting that, for every boolean algebra B and every prime filter ∇ of B, B/∇ is a boolean algebra (which is isomorphic to 2), we have that (R(B/∇), D(B/∇)) is a matrix isomorphic to M′, given in Proposition 4.2. From this, we have:

Corollary 4.7. Let M = (A, D) be a finite matrix with r elements. Then, for every boolean algebra B and every prime filter ∇ of B, it holds:
(a) The set R(B/∇) has exactly r elements.
(b) Every (b_0, ..., b_k) ∈ R(B) belongs to exactly one class of R(B/∇). All the classes of this last set are of the form (b_0, ..., b_k)/∇ = (b_0/∇, ..., b_k/∇).
(c) The function e_∇ : R(B) → R(B/∇) given by e_∇(b_0, ..., b_k) = (b_0/∇, ..., b_k/∇) is a matrix epimorphism. Moreover, e_∇(b_0, ..., b_k) ∈ D(B/∇) iff b_0 ∈ ∇.

Proof: Since R(2) has r elements, we have (a), and (b) follows straightforwardly. For (c), the proof is as in Proposition 3.9¹⁰, using items (a) and (b). □

Definition 4.8. The class of all the discriminant M-structures will be denoted by D_M. This class defines the “local consequence relations” |=_{R(B)} and

¹⁰ The results stated in (a) and (b) are the abstractions of Proposition 3.8. On the other hand, (c) is, of course, a generalization of Proposition 3.9.
the “global consequence relation” |=_{D_M} as in Definition 1.2. That is: if R(B) is in D_M, then Γ |=_{R(B)} α iff, for every R(B)-valuation w : L(C) → R(B) such that w(γ) satisfies eq_D for every γ ∈ Γ, it is valid that w(α) satisfies eq_D. And |=_{D_M} := ∩ |=_{R(B)} (with R(B) ranging over D_M).

Theorem 4.9. If a finite C-matrix M = (A, D) admits a discriminant pair, then the previously defined semantics D_M verifies: |=_{D_M} α iff |=_M α, for every α ∈ L(C).

Proof: Suppose that there is a formula β(p, q_1, ..., q_m), a tuple ~a = (a_1, ..., a_m) of elements of A (with |A| = r ∈ ω) and a number k ∈ ω such that the function h_k : A → 2^{k+1} is injective. By Proposition 4.1, the sets h_k(A) and h_k(D) can be characterized by means of equations (which we denote, respectively, eq_A and eq_D). These equations define the class of discriminant structures D_M, cf. Definition 4.8. We will prove that |=_M α iff |=_{D_M} α. First of all, if we consider M′ = (h_k(A), h_k(D)) of Proposition 4.2 (which is, actually, a discriminant structure), we have that |=_{D_M} α implies |=_{M′} α, which is equivalent to |=_M α. For the other implication, suppose that there exist a boolean algebra B and an R(B)-valuation w : L(C) → R(B) such that w(α) ∉ D(B). So, w_0(α) := π_0(w(α)) ≠ 1_B, by Proposition 4.6. There is a prime filter ∇ of B with w_0(α) ∉ ∇. From this and Corollary 4.7 (c), there is a matrix epimorphism e_∇ : R(B) → R(B/∇) with e_∇(w(α)) ∉ D(B/∇) (because, by Corollary 4.7 (c), e_∇(w(α)) ∈ D(B/∇) would require w_0(α) ∈ ∇). So, e_∇ ∘ w : L(C) → R(B/∇) is an R(B/∇)-valuation that verifies (e_∇ ∘ w)(α) ∉ D(B/∇). Hence, ⊭_{R(B/∇)} α. Equivalently, ⊭_{M′} α (since R(B/∇) ≅ R(2) ≅ M′), and so ⊭_M α. □

Remark 4.10. Note that Theorem 4.9 establishes a weak adequacy between the relations |=_M and |=_{D_M}. The proof of a strong adequacy (that is, that Γ |=_M α iff Γ |=_{D_M} α) is left for future work.
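To fix intuitions about the function h_k used throughout this section, here is a small executable sketch. Both the exact indexing of h_k (as the tuple of χ_D-values of the first k iterates of f_{(β,~a)}) and the three-valued example below (a Łukasiewicz-style negation, which is not one of the matrices of this paper) are our own illustrative assumptions:

```python
def make_h(f_beta, D, k):
    """h_k(a) = (chi_D(a), chi_D(f(a)), ..., chi_D(f^k(a))),
    one plausible reading of the discriminant-pair construction."""
    def h(a):
        out, x = [], a
        for _ in range(k + 1):
            out.append(1 if x in D else 0)  # chi_D of the current iterate
            x = f_beta(x)                   # next iterate of f_(beta, ~a)
        return tuple(out)
    return h

# Toy 3-valued matrix A = {0, 1/2, 1}, D = {1}, with beta(p) = ~p read as
# the Lukasiewicz negation x -> 1 - x (our example, not the paper's).
A, D = [0.0, 0.5, 1.0], {1.0}
h1 = make_h(lambda x: 1.0 - x, D, k=1)
images = {a: h1(a) for a in A}
# h1 is injective, so beta = ~p discriminates this matrix with one iteration.
assert len(set(images.values())) == len(A)
```

Here h_1 is injective, so the single formula β = ¬p discriminates the three truth-values with one iteration; Corollary 4.13 below shows that no analogous pair exists for M_Urq.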
At this point we have seen that semantics of the form D_M are, in fact, a sort of “convenient representation” of certain matrix semantics M. Moreover, Theorem 4.9 establishes that this kind of semantics can be obtained whenever a discriminant pair is found. We will see now, however, that it is not always possible to obtain such a pair for a given matrix M. The next example shows a particular case of this:

Example 4.11. Consider the Urquhart logic Urq, characterized by the {∗}-matrix M_Urq = (A, D) = (({I, II, III, IV, V}, {∗}), {I}), where ∗ is the binary operation indicated in the following truth-table:

      ∗ |  I   II  III  IV   V
    ----+----------------------
      I |  V   V   V    V    V
     II |  V   V   V    V    V
    III |  I   II  V    V    V
     IV |  I   I   V    V    V
      V |  V   V   V    V    V
This logic was defined by A. Urquhart in [11] as an example of a matrix logic that cannot be axiomatized by a finite set of structural rules. It is possible to prove that M_Urq does not admit a discriminant pair. This fact is demonstrated in the sequel.

Proposition 4.12. For every formula α ∉ V and every tuple ~a of elements of {I, II, III, IV, V}, there are two elements a_0, a_1 ∉ D, a_0 ≠ a_1, such that α(~a, a_0) = α(~a, a_1) = V.

Proof: We prove our claim by induction on n, the number of occurrences of the connective ∗ in α (where n ≥ 1, because α ∉ V). When n = 1, then α = p ∗ q_1 or α = q_1 ∗ p¹¹, and ~a ∈ A. The five truth-functions obtained for the first case are f_{(α,I)} = I ∗ x, ..., f_{(α,V)} = V ∗ x. For these functions, consider a_0 = II and a_1 = V, and so the result is valid. For the second case we have the truth-functions g_{(α,I)} = x ∗ I, ..., g_{(α,V)} = x ∗ V. Here a_0 = IV, a_1 = V. Now, suppose that the result is valid for every m < n, and let α = α(p, q_1, ..., q_t) have n occurrences of ∗, with ~a = (a_1, ..., a_t) any tuple of A^t. Then α = β ∗ γ. If β, γ ∈ V we return to n = 1, which was already analyzed. So, suppose (without loss of generality) that β ∉ V. By I.H., there are a_0 ≠ a_1 such that β(~a, a_0) = β(~a, a_1) = V. Now, since V ∗ x = x ∗ V = V for every x ∈ A, we have that α(~a, a_0) = β(~a, a_0) ∗ γ(~a, a_0) = V ∗ γ(~a, a_0) = V, and α(~a, a_1) = V by the same argument. If β ∈ V, then γ ∉ V and the same reasoning applies. This concludes the proof. □

The previous result entails, obviously:

Corollary 4.13. The matrix M_Urq does not admit discriminant pairs.

Proof: Note that Proposition 4.12 implies that there are no pairs (β, ~a) that can discriminate all the values of A by one iteration. But, noting that V is an absorbent element of M_Urq, we have that, for every pair (β, ~a) and every k ∈ ω, there are two different elements a_0, a_1 ∈ A such that h_k(a_0) = h_k(a_1) = (0, 0, ..., 0) (k + 1 times), and so they cannot be discriminated. □

This last result shows that, despite the simplicity of the process of obtaining a D.S.S., its basis (that is, the existence of a discriminant pair) is not trivial.
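Proposition 4.12 can also be verified mechanically: below we generate every unary polynomial β(~a, x) of M_Urq that actually contains x (closing the identity function under ∗ with constants and previously built functions on either side), and check that each one sends at least two distinct non-designated values to V. Reading the rows of the truth-table above as the left argument of ∗ is our assumption about the printed layout:

```python
from itertools import product

# Urquhart's matrix; D = {'I'}, and 'V' is absorbent on both sides.
VALS = ['I', 'II', 'III', 'IV', 'V']
ROWS = {
    'I':   dict(zip(VALS, 'V V V V V'.split())),
    'II':  dict(zip(VALS, 'V V V V V'.split())),
    'III': dict(zip(VALS, 'I II V V V'.split())),
    'IV':  dict(zip(VALS, 'I I V V V'.split())),
    'V':   dict(zip(VALS, 'V V V V V'.split())),
}
star = lambda a, b: ROWS[a][b]

# Unary functions are represented as value-tuples over VALS. We close the
# identity under pointwise *, allowing constants on the other side, so that
# with_x ends up containing every unary polynomial in which x occurs.
ident = tuple(VALS)
consts = {tuple(c for _ in VALS) for c in VALS}
with_x = {ident}
while True:
    new = {tuple(star(a, b) for a, b in zip(f, g))
           for f, g in product(with_x | consts, repeat=2)
           if f in with_x or g in with_x}
    if new <= with_x:
        break
    with_x |= new

# Proposition 4.12: every such polynomial (other than x itself) sends at
# least two distinct non-designated values to V.
for f in with_x - {ident}:
    hits = [a for a, v in zip(VALS, f) if v == 'V' and a != 'I']
    assert len(hits) >= 2
```

Since V is absorbent, this collapse survives any number of iterations, which is the content of Corollary 4.13.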
5  Discriminant Structures and Algebraizability
The expressibility of a matrix semantics by means of discriminant structures entails, from our point of view, certain nice properties in algebraic terms. The basic idea is that, cf. Theorem 4.9, a discriminant structure is

¹¹ Of course, in both cases α is the same formula, but we distinguish the variables of α that will be instantiated by ~a using q_1, because the truth-functions obtained are not the same, considering that ∗ is not commutative.
just a matrix such that: (i) its support is defined on a subset of a boolean algebra, and so it can be characterized by means of boolean equations; (ii) its operations are defined by means of boolean functions. Moreover, the tautologies can be explained using boolean equations, too. These facts give discriminant structures a certain “algebraic character”, in a broad sense. But it also deserves to be remarked that this kind of semantics can be applied to logics that are not algebraizable (that is, that do not have an equivalent algebraic semantics)¹². The definition of algebraizability is based on the notion of protoalgebraizability. We recall both concepts in the sequel.

Definition 5.1. A logic L = (C_L, ⊢_L) is protoalgebraic if and only if there exists a set PR(p_1, p_2) = {φ_i(p_1, p_2)}_{i∈I} ⊆ L(C_L) verifying¹³:
(R) ⊢_L φ_i(p_1, p_1), for every φ_i ∈ PR(p_1, p_2).
(MP) p_1, PR(p_1, p_2) ⊢_L p_2.
On the other hand, L is algebraizable iff there exists a class K of algebras that constitutes an equivalent algebraic semantics for L.

For instance, classical logic is algebraizable (K being the class of boolean algebras). Also, Intuitionistic Logic and Łukasiewicz logic are algebraizable (their equivalent algebraic semantics are the classes of Heyting algebras and MV-algebras, resp.). It must be remarked that every algebraizable logic is also protoalgebraic. This fact will allow us to exhibit a logic that verifies: (a) it admits Discriminant Structures Semantics; (b) it is not algebraizable. In fact, this logic is L⋆, the basis of our Example 3.1. As was already proved, L⋆ verifies (a). With respect to (b), our proof is based on the following fact: L⋆ has no tautologies, as we will see now. For that, consider the following definitions and notations. First, define the function Par : ω² → 2 in the obvious way: Par(i, j) = 1 iff i and j are both odd or both even. Besides that, the order of a formula α is the number of different atomic formulas that appear in α.
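Conditions (R) and (MP) of Definition 5.1 can be checked by brute force in any finite-valued case. As a minimal sketch (our example, not from the paper), take classical two-valued logic with the standard witness PR(p_1, p_2) = {p_1 ⊃ p_2}:

```python
from itertools import product

# Classical material implication on {0, 1}.
imp = lambda a, b: max(1 - a, b)

# (R): phi(p1, p1) is a tautology for every phi in PR = {p1 -> p2}.
assert all(imp(a, a) == 1 for a in (0, 1))

# (MP): every valuation designating both p1 and p1 -> p2 designates p2.
assert all(b == 1 for a, b in product((0, 1), repeat=2)
           if a == 1 and imp(a, b) == 1)
```

By Proposition 5.6 below, no such set PR can exist for a matrix logic without tautologies whose designated set is proper and non-empty.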
With this in mind, we have:

Proposition 5.2. For every α ∈ L(C_{L⋆}) of the form ¬^k p, α is not a tautology.

Proof: Obvious. □

Proposition 5.3. For every α = α(p) ∈ L(C_{L⋆}), if α contains at least one symbol ⊃, then, for every valuation v, v(α) ≠ 1/2.

Proof: By induction on k ≥ 1, the number of implications in α. If k = 1, then α is of the form ¬^i p ⊃ ¬^j p. So, by the truth-table of ¬ we have that, for every v, v(α) = 0 or v(α) = 1 (depending on Par(i, j)). Suppose that our assumption holds for every k < n, and consider α containing n occurrences of ⊃. We can see α as ¬^t(β ⊃ γ), with 0 ≤ t. By I.H., v(β) ≠ 1/2 ≠ v(γ). By the truth-table of ⊃, v(β ⊃ γ) ≠ 1/2, and so (by the truth-table of ¬), v(α) ≠ 1/2. □

¹² The relation between an algebraizable logic and its equivalent algebraic semantics is similar to the one existing between classical logic and the class of boolean algebras, or between Intuitionistic Logic and the class of Heyting algebras. Again, a very complete text about algebraizability and Abstract Algebraic Logic in general terms is [2].
¹³ Usually, PR(p_1, p_2) is denoted p_1 PR p_2.
The previous proposition entails two results. First of all, considering Propositions 5.2 and 5.3, we have the following fact:

Corollary 5.4. For every formula α of order 1 (α = α(p)), it holds that ⊭_{M⋆} α.

Proposition 5.5. For every α ∈ L(C_{L⋆}), ⊭_{M⋆} α.

Proof: Let α = α(x_1, ..., x_n) be in L(C_{L⋆}), and consider now the formula α′ := α(x_1, x_2/x_1, ..., x_n/x_1). So, α′ is of order 1. By Corollary 5.4, there is a valuation v and an element a ∈ A⋆ such that v(x_1) = a and v(α′(x_1)) ≠ 1/2. Defining now the valuation w_v : V → A⋆ such that w_v(x_1) = ... = w_v(x_n) = a, it is easy to see that w_v(α) = v(α′) ≠ 1/2, and therefore ⊭_{M⋆} α. □

On the other hand, we can relate logics without tautologies and protoalgebraizability in this way:

Proposition 5.6. If a logic L = (C_L, ⊢_L) has no tautologies, and ⊢_L can be defined by a matrix M = (A, D) with ∅ ≠ D ≠ A, then L is not protoalgebraic (and, therefore, not algebraizable).

Proof: Suppose L protoalgebraic, with ⊢_L = |=_M. Then there is a set PR(p, q) ⊆ L(C_L) satisfying PR(p, p) ⊆ Cn_M(∅) and q ∈ Cn_M(PR(p, q) ∪ {p}). Since L has no tautologies, PR(p, q) = ∅. So, p |=_M q. Now, if we consider any valuation v such that v(p) ∈ D meanwhile v(q) ∉ D (v exists because ∅ ≠ D ≠ A), we have p ⊭_M q, which is absurd. So, L cannot be protoalgebraic, nor algebraizable, therefore. □

So, from Definition 5.1 and Propositions 5.5 and 5.6, and recalling that the set of designated values of L⋆ is {1/2}, we have:

Corollary 5.7. L⋆ is not algebraizable.
6  Separation of truth-values
As presented above, semantics based on discriminant structures are motivated by a recurrent idea in the field of many-valued logics, which can be stated as follows: the truth-values of a matrix M = (A, D) can often be separated (or discriminated) according to their relation with the designated set D. This approach has already been applied with several purposes. For example, the separation of truth-values can determine whether certain formulas are synonymous or not (see [10], for example). In the same line, the Leibniz operator Ω (which provides an alternative definition of protoalgebraic and algebraizable logics) is defined by means of the notion of formulas indiscernible relative to the designated set D, which in a certain sense is based on separation of truth-values, too. Also, several definitions of two-valued (non-truth-functional) semantics make use of this notion. One example of such a construction is Dyadic Semantics (see
[1]), which has certain similarities with Discriminant Structures Semantics, from our point of view. So, we will briefly discuss here the relationship between both constructions. For this, the notation of [1] has been modified, for a better comparison with this article. Roughly speaking, a dyadic semantics for a given matrix logic L = (C_L, |=_M) induced by an n-valued matrix M = (A, D) is built on the basis of:
• A set of formulas {φ_i}_{1≤i≤k}, where (for every i) φ_i = φ_i(p) ∈ L(C_L).
• A function h : A → 2 such that, for every φ_i and every a ∈ D, h(φ_i(a)) = 1 iff φ_i(a) ∈ D.
In addition, the truth-functions associated to the set {φ_i}_{0≤i≤k} (together with h) actually separate the values of A by means of (k+1)-tuples of elements of 2, and therefore this set is a generalization of the formula β of our discriminant pair. The scope of Dyadic Semantics is in fact wider than that of D.S.S., wherein just one formula β (eventually iterated) is allowed. On the other hand, Dyadic Semantics is based on the existence of one-variable formulas of L(C_L), instead of the truth-functions used in this article. In fact, in D.S.S., the tuple ~a of elements in the discriminant pair (β, ~a) allows us to construct the (one-variable) truth-functions f_{(β,~a)} without the need of associated formulas in L(C_L). In other words, while Dyadic Semantics is focused more strictly on the languages involved, D.S.S. mainly analyzes the matrices used. It must be noted that if a logic is functionally complete, then every truth-function has an associated formula that describes it, and so it would be possible to “jump” from the matrices to the formal languages themselves. Besides that, a “hybrid method” of separation of truth-values can be devised: consider simply that every truth-value of a matrix M = (A, D) is tested by a set f_{(β_i,~a_i)} of discriminant (non-iterated) pairs. An informal example of this will be applied to the logic Urq.

Proposition 6.1.
The truth-values of the logic Urq of Example 4.11 can be discriminated by a set {f_i}_{0≤i≤4} (with f_i = f_{(β_i,~a_i)}) of truth-functions.

Proof: The following schema shows the truth-functions which discriminate every truth-value of Urq, and the identification of each truth-value by means of the function h_4 : A → 2^5, given by h_4(a) = (χ_D(f_i(a)))_{0≤i≤4}, for every a ∈ A:

          f_0 = x   f_1 = x∗I   f_2 = x∗II   f_3 = I∗x   f_4 = II∗x   h_4
     I       I          I            I            V           V       (1, 1, 1, 0, 0)
     II      II         V            I            V           V       (0, 0, 1, 0, 0)
     III     III        V            V            I           V       (0, 0, 0, 1, 0)
     IV      IV         V            V            I           I       (0, 0, 0, 1, 1)
     V       V          V            V            V           V       (0, 0, 0, 0, 0)
So, all the truth-values of Urq can be discriminated by a convenient set of truth-functions. □
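The schema above can be checked mechanically: reading off its value columns and applying χ_D (with D = {I}) must yield five pairwise distinct tuples h_4(a). A small sketch:

```python
# The value columns of the schema above (f0 = x, f1, ..., f4), read off row
# by row; D = {'I'}. We check that the induced 5-tuples h4(a) are pairwise
# distinct, i.e. that the set {f0, ..., f4} separates all five values.
VALS = ['I', 'II', 'III', 'IV', 'V']
table = {                      # a -> (f0(a), f1(a), f2(a), f3(a), f4(a))
    'I':   ('I',   'I', 'I', 'V', 'V'),
    'II':  ('II',  'V', 'I', 'V', 'V'),
    'III': ('III', 'V', 'V', 'I', 'V'),
    'IV':  ('IV',  'V', 'V', 'I', 'I'),
    'V':   ('V',   'V', 'V', 'V', 'V'),
}
h4 = {a: tuple(1 if v == 'I' else 0 for v in table[a]) for a in VALS}
assert h4['I'] == (1, 1, 1, 0, 0) and h4['V'] == (0, 0, 0, 0, 0)
assert len(set(h4.values())) == 5     # all five truth-values discriminated
```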
It must be indicated that this example does not develop the method that “explains” the operation ∗ of M_Urq. Moreover, it is not clear here (as in Dyadic Semantics in general terms) in which way an algebraic treatment can be carried out; of course, these topics deserve a deeper treatment in the future.
7  Twist-Structures Semantics, revisited
We conclude this paper by returning to our motivating construction, which is Twist-Structures Semantics. Note at this point that our proposed generalization misses a key feature: the eventual torsion of some axes (which, as was already said, suggests the name of the twist-structures). This process is not considered in the general definition of Discriminant Structures Semantics. What is the reason for this torsion? Mainly, twist-structures, even when considered as C-matrices, were usually analyzed w.r.t. their lattice-theoretic behavior. Under this perspective, the designated elements of every structure of the form R(A) are usually interpreted as the greatest elements (according to a certain convenient order relation ≤_{R(A)} on R(A)). For that, it is necessary that the second axis be considered with an inverted order: otherwise, the designated elements of R(A) are not the greatest elements according to ≤_{R(A)}. See the mentioned references [3], [4], [7] and [12] as motivating examples of this idea. Besides that, twist-structures were usually applied to logics with certain common properties. Among them, the existence of an implication connective ⊃ related to the structure R(A) as usual: x ≤_{R(A)} y iff x ⊃ y = 1_{R(A)}. Again, this last condition is better explained when 1_{R(A)} is the greatest element with respect to ≤_{R(A)}. In addition, the consequence relations that can be expressed by Twist-Structures Semantics are defined on languages wherein ⊃ is related to some extent with a “negation” connective (∼, or ¬). This forces the “torsion” of the second axis in such structures. For example, in the works of M. Fidel and D. Vakarelov about the logic N, this new semantics explains the behavior of counter-examples (which are obviously related with the negations). In this context, it seems a natural presentation that the second component of every pair (which, in fact, “explains” the counter-examples) be considered with its dual order.
A similar fact happens in the case of the logic I¹P⁰, sketched in Example 2.1. In this last case, in addition, the operations of the second axis B∗ can be explained in terms of B and, therefore, the definition of the operations in the twist-structures can be simplified in some sense. This does not happen in the case of N, because its Twist-Structures Semantics is defined on Heyting algebras (whose duals are not Heyting algebras). Thus, in the case of N, the “torsion” is more necessary than in the case of I¹P⁰. Turning back to D.S.S., note that they can be applied to a greater class of logics, since they depend neither on considerations of order relations,
nor on the existence of implications and/or negations. Since the treatment of these last semantics is more abstract, they can be used in a general way, regardless of intuitions. However, from the above, even if we consider that Discriminant Structures Semantics is a powerful generalization of Twist-Structures Semantics, the latter method remains more intuitive, mainly for “natural” logical matrices, wherein implications and negations are related in some sense.
References

[1] C. Caleiro, W. Carnielli, M. E. Coniglio, and J. Marcos. Two's company: "The humbug of many logical values". In Logica Universalis (J.-Y. Beziau, ed.), Birkhäuser: 169–189, 2005.
[2] J. Czelakowski. Protoalgebraic Logics. Kluwer Academic Publishers, Dordrecht, 2001.
[3] M. M. Fidel. An algebraic study of a propositional system of Nelson. In Proceedings of the First Brazilian Conference on Mathematical Logic, Campinas, 1977 (A. I. Arruda, N. C. A. da Costa, R. Chuaqui, eds.), Lect. Notes Pure Appl. Math. 39: 99–117, 1978.
[4] M. M. Fidel and D. Brignole. Algebraic study of some non-classical logics by means of products of algebras (in Spanish). In Proceedings of the First "Antonio Monteiro" Congress, Bahía Blanca, Argentina: 25–38, 1991.
[5] C. Murciano and F. Ramos. Twist-structures semantics for the Gödel n-valued logics. Submitted, 2012.
[6] S. Odintsov. Algebraic semantics for paraconsistent Nelson's logic. Journal of Logic and Computation, 13: 453–468, 2003.
[7] F. Ramos and V. Fernández. Twist-structures semantics for the logics of the hierarchy IⁿPᵏ. Journal of Applied Non-Classical Logics, 19(2): 183–209, 2009.
[8] H. Rasiowa and R. Sikorski. The Mathematics of Metamathematics. PWN, Warszawa, 3rd edition, 1970.
[9] A. M. Sette and W. A. Carnielli. Maximal weakly-intuitionistic logics. Studia Logica, 55: 181–203, 1995.
[10] T. Smiley. The independence of connectives. The Journal of Symbolic Logic, 27: 426–436, 1962.
[11] A. Urquhart. A finite matrix whose consequence relation is not finitely axiomatizable. Reports on Mathematical Logic, 9: 71–73, 1977.
[12] D. Vakarelov. Notes on N-lattices and constructive logic with strong negation. Studia Logica, 36: 109–125, 1977.