Modularity in Many-sorted Term Rewriting Systems

Jaco van de Pol

January 1993

Abstract. Many-sorted term rewriting systems (MTRSs) are an extension of the formalism of term rewriting systems (TRSs). The direct sum of TRSs is generalized to the direct sum of MTRSs. An equivalence between "taking the direct sum" and "eliminating the sorts" is established: component closed reduction properties that are preserved under disjoint union (modularity) are also preserved under sort elimination (persistence). The converse has been proved earlier.


1 Introduction

1.1 Related Work

Term rewriting systems (TRSs) have been developed in the forties as a model for computation. They were used in those days for the study of the λ-calculus (Church). Another issue was the study of completion procedures for equational theories, started by Knuth and Bendix. In this setting, the main goal of term rewriting systems is to give an implementation of a theory of (undirected) equations. The completion procedure delivers a TRS, in which the rules have a unique direction. The applications of TRSs in computer science can be found in the analysis and implementation of functional programming languages and in the study and implementation of algebraic specifications of abstract data types. To meet these applications several extensions have been proposed: graph and infinite rewriting, conditional term rewriting, term rewriting with rule priorities, and order-sorted term rewriting [GJM85]. The extension to many-sorted term rewriting (which is a special case of order-sorted rewriting) is the main subject of this thesis.

More recent is the research on the modularity of properties of term rewriting systems. A property is called modular if it is preserved by the disjoint union operator, also called the direct sum, on TRSs [Toy87b, Toy87a, Rus87, Mid90]. This notion is extended here to many-sorted term rewriting systems. This thesis can be seen as a continuation of the research of H. Zantema on the persistence of properties of many-sorted TRSs. A property is called persistent if it is preserved by the sort elimination operator on many-sorted TRSs. Zantema proved that persistent properties are modular, for a certain class of properties (component closed reduction properties). In fact we can partially answer one of the "Concluding remarks" of his article [Zan91, p. 628]: "It is unknown whether there are modular reduction properties that are not persistent." For properties that are modular on many-sorted TRSs the answer is: no, there are none.

1.2 Overview and Main Results

You are reading my master's thesis, which concludes my study in Computer Science at the "Rijksuniversiteit Utrecht". I am grateful to Hans Zantema for being present when I needed him. It sometimes took him 7 hours a day.

In the rest of this section an overview is given of the contents of this thesis. Chapter 2 gives an introduction to classical term rewriting systems. Familiar readers can skip it, but perhaps it is better to have a look at section 2.4. This section defines the notion of "modularity" and summarizes some known results. Chapter 3 describes the extension to many-sorted term rewriting systems.

This has been done before, only the notations are different. As far as we know, the definition of the disjoint union of many-sorted term rewriting systems is new. In section 3.5 the notion of "persistence" is defined. The definition of the problem of the thesis can be found in section 3.6. In Chapter 4 the main result is proven: the sort elimination operator on a many-sorted term rewriting system can be simulated by taking the disjoint union of modified copies of the many-sorted term rewriting system. This result is used to prove the persistence of a class of modular properties for many-sorted term rewriting systems. This proof is given in section 4.3.


2 Term Rewriting Systems

2.1 Signatures and Terms

Terms are generated by a set of function symbols and a set of variables. The set of function symbols is typically denoted by F. The set of variables is denoted by V. We require the sets of function symbols and variables to be disjoint, so V ∩ F = ∅. A function symbol is 'syntactically applied' to its arguments. For every function symbol the number of arguments is fixed; it is called the arity. The arity is given by the function

  arity : F → ℕ,

in which ℕ is the set of natural numbers including 0. An (unsorted) signature Σ is a tuple (F, V, arity). Now we can define the terms over Σ = (F, V, arity) inductively:

  V ⊆ T(Σ);
  f(t1, ..., tn) ∈ T(Σ)  iff  (1) f ∈ F, (2) arity(f) = n and (3) t1, ..., tn ∈ T(Σ).

If n = 0 we omit the (). We say that f is 'syntactically applied' to the arguments t1, ..., tn. If the signature Σ is clear, we use the abbreviation T for T(Σ). The set of terms is defined inductively, so proofs on terms can be done by induction on the structure of the term. For this reason it is practical to have a measure for terms. This is given by the following definition:

  depth(x) = 1
  depth(f(t1, ..., tn)) = 1 + max_{1 ≤ i ≤ n} depth(ti)

In this definition the maximum over an empty set is zero. The root of a term is its outermost symbol:

  root(x) = x               if x ∈ V
  root(f(t1, ..., tn)) = f  if f ∈ F.
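The following small Python sketch (not part of the thesis; the nested-tuple encoding of terms is an assumption made only for illustration) shows one way to realize these definitions: variables are plain strings, and f(t1, ..., tn) is the tuple ("f", t1, ..., tn), with constants as 1-tuples such as ("a",).

```python
# Illustrative encoding of terms over an unsorted signature.

def is_var(t):
    return isinstance(t, str)

def depth(t):
    """depth(x) = 1; depth(f(t1,...,tn)) = 1 + max depth(ti), with max over the empty set = 0."""
    if is_var(t):
        return 1
    return 1 + max((depth(arg) for arg in t[1:]), default=0)

def root(t):
    """The outermost symbol: the variable itself, or the head function symbol."""
    return t if is_var(t) else t[0]

t = ("f", "x", ("g", ("a",)))   # the term f(x, g(a))
print(depth(t), root(t))        # 3 f
```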

Let Σ be a signature with variables V. Then the set T consists of all terms (over Σ), as described above. A substitution is a mapping

  σ : V → T.

Instead of σ(x) we will write x^σ. Substitutions are extended to general terms by defining

  (f(t1, ..., tn))^σ = f(t1^σ, ..., tn^σ).

If a variable occurs twice in some term, then after the application of a substitution both occurrences are replaced by the same term, because the substitution is a function on variables.
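A minimal sketch of substitution application, under the same illustrative tuple encoding as above (dictionaries as substitutions are an assumption, not the thesis notation):

```python
# Applying a substitution sigma : V -> T to a term.

def is_var(t):
    return isinstance(t, str)

def apply_subst(t, sigma):
    """Return t^sigma: every occurrence of a variable is replaced by the same term,
    since sigma is a function on variables."""
    if is_var(t):
        return sigma.get(t, t)          # variables outside dom(sigma) stay put
    return (t[0],) + tuple(apply_subst(arg, sigma) for arg in t[1:])

sigma = {"x": ("g", ("a",))}
print(apply_subst(("f", "x", "x"), sigma))   # ('f', ('g', ('a',)), ('g', ('a',)))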

Proposition 2.1 Let l ∈ T \ V and let σ be a substitution. Then root(l) = root(l^σ).

Proof: Because l ∉ V, l = f(t1, ..., tn) for some f ∈ F and n = arity(f). Then l^σ = f(t1^σ, ..., tn^σ). So root(l^σ) = root(l) = f. □

2.2 Term Rewriting Systems

A term rewriting system T is a signature F together with a finite set R of pairs of terms (l, r) for which:

  - l ∉ V
  - Var(r) ⊆ Var(l)

Such pairs are called rewrite rules and written as l → r. The rewrite relation →_R is defined by: s →_R t for s, t ∈ T(F) if and only if this can be proved using the following proof rules:

  - l^σ →_R r^σ for every σ : V → T(F) and l → r ∈ R
  - f(t1, .., tj, .., tn) →_R f(t1, .., tj', .., tn) if tj →_R tj'.

The first rule is an axiom scheme, the second rule is a derivation rule scheme. The first rule closes the rewrite relation under substitution, the second under contexts. A rewrite step s →_R t is called a reduction step. When we have to prove some property of a reduction step, this can be done by induction on the structure of the proof that it is a reduction step. Here l^σ is called the redex, and a reduction step of the form l^σ →_R r^σ is an elementary reduction step (or a reduction step on top level). We say that s reduces to t if s →+_R t, where →+ is the transitive closure of →. If s doesn't reduce to any term, we say s is a normal form, or s is in normal form. The set of normal forms of T is written as NF(T). A TRS T only depends on the signature F and the set of rules R. This is denoted by T = (F, R).
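As an illustration of the two proof rules, the following sketch (again using the assumed tuple encoding; `match` and `reducts` are hypothetical helper names) computes all one-step reducts of a term, either by an elementary step at the root or inside an argument:

```python
# One-step rewriting: matching, substitution, and closure under contexts.

def is_var(t):
    return isinstance(t, str)

def apply_subst(t, sigma):
    if is_var(t):
        return sigma.get(t, t)
    return (t[0],) + tuple(apply_subst(a, sigma) for a in t[1:])

def match(l, t, sigma=None):
    """Return sigma with l^sigma == t, or None if l does not match t."""
    sigma = dict(sigma or {})
    if is_var(l):
        if l in sigma:
            return sigma if sigma[l] == t else None   # non-linear left-hand sides
        sigma[l] = t
        return sigma
    if is_var(t) or l[0] != t[0] or len(l) != len(t):
        return None
    for la, ta in zip(l[1:], t[1:]):
        sigma = match(la, ta, sigma)
        if sigma is None:
            return None
    return sigma

def reducts(t, rules):
    """All terms t' with t ->_R t' in one step."""
    result = []
    for l, r in rules:
        sigma = match(l, t)
        if sigma is not None:                          # elementary step at the root
            result.append(apply_subst(r, sigma))
    if not is_var(t):                                  # closure under contexts
        for j, arg in enumerate(t[1:], start=1):
            for arg2 in reducts(arg, rules):
                result.append(t[:j] + (arg2,) + t[j + 1:])
    return result

rules = [(("g", "x", "y"), "x"), (("g", "x", "y"), "y")]
print(reducts(("f", ("g", ("0",), ("1",))), rules))    # [('f', ('0',)), ('f', ('1',))]
```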

2.3 Properties of Term Rewriting Systems

We are going to discuss features of properties of term rewriting systems. Therefore it helps our intuition to define some concrete properties. There are some well known properties of term rewriting systems which are particularly useful for applications. There are two kinds of properties: properties of the reduction relation and syntactic properties. The first kind can be discussed in the general setting of abstract reduction systems. The second kind depends on the form of the rules of a term rewriting system.

2.3.1 Abstract Reduction Systems

An abstract reduction system (ARS) A is a set A with a binary relation R. Term rewriting systems are a special kind of ARS, with the terms T(F) as the underlying set A and the reduction relation →_R as the binary relation R on A. The relation R is called the reduction relation and written as →, with transitive closure →+ and transitive and reflexive closure →*. The transitive, reflexive and symmetric closure is denoted by =_R and is called the convertibility relation. Syntactic equality between terms is denoted by ≡. An element a ∈ A is called a normal form (NF(a)) if there is no x ∈ A with a → x. An element a ∈ A has a normal form if there is a b ∈ A with NF(b) and a →* b. Two elements a, b ∈ A are called joinable (↓) if they have a common reduct: a ↓ b if there is a c ∈ A with a →* c and b →* c. Properties of abstract reduction systems are in fact properties of relations. These properties are often called reduction properties. Some important reduction properties are described below.

strongly normalizing An ARS A is called strongly normalizing (SN) if there is no infinite sequence a0, a1, ... with ai ∈ A and ai → ai+1 for all i ∈ ℕ. So every reduction ends in a normal form. For term rewriting systems this means: there is no infinite reduction. Other terminology: terminating or Noetherian.

weakly normalizing An ARS A is called weakly normalizing (WN) if every element a ∈ A has a normal form.

confluent An ARS A is called confluent or Church-Rosser (CR) if every two reducts of any element are joinable: if a →* b and a →* c then b ↓ c.

weakly confluent An ARS A is called weakly confluent (WCR) if the following holds: if a → b and a → c then b ↓ c.

unique normal forms An ARS A has unique normal forms (UN) if any two convertible normal forms are identical. Stated otherwise: every equivalence class under "=" has at most one normal form.

UN with respect to reduction An ARS A has unique normal forms with respect to reduction (UN→) if no element of A reduces to different normal forms.

normal form property An ARS A has the normal form property (NF) if every element convertible to a normal form reduces to that normal form.

For a discussion of these properties and a proof of the relations between them we refer to [Mid90, §1.1]. Here we only state the relationship between these properties, in the same form as A. Middeldorp in the just cited book (see Figure 1).

Figure 1: The relationship between some reduction properties.

All these properties say something about the whole reduction relation. But if we analyze them, we see that it is enough to verify them for every equivalence class generated by the reduction relation. This feature of properties is called component closed. The formal definition is given below and can be found in [Zan92, p. 67]. Let A = (A, R) be some ARS. Then Ai denotes the family of equivalence classes under the relation =_R. Ri denotes the relation R restricted to the equivalence class Ai. We define A_i = (Ai, Ri). A property P on abstract reduction systems is component closed if for all ARSs A:

  P(A) ⟺ (∀i : P(A_i)).

All of the properties of Figure 1 are component closed, due to the following proposition:

Proposition 2.2 Let A = (A, R) be an ARS. Let t, t' ∈ A and let [t] be the equivalence class of t under =_R. Then

  - t → t' in A ⟺ t → t' in [t];
  - t →* t' in A ⟺ t →* t' in [t];
  - t =_R t' in A ⟺ t =_R t' in [t];
  - NF(t) in A ⟺ NF(t) in [t].

Proof: The first three assertions follow from the fact that, by definition, the equivalence classes are closed under the reflexive, symmetric and transitive closure of the reduction relation →. The last assertion follows immediately from the first. □

Now it can easily be verified that SN, CR, UN, UN→, NF, WCR and WN are component closed. An example of a property that is not component closed is the (not so interesting) property OON (only one normal form): for all t and t', if NF(t) and NF(t') then t ≡ t'. The following example shows that OON is not component closed:

Example 1 Let A = {a, b, c} and let a → b. Then the equivalence classes are A1 = {a, b} and A2 = {c}, with a → b in A1. Both A1 and A2 satisfy OON, but A does not, because b and c are different normal forms.
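For finite ARSs such reduction properties can be checked by brute force. The following sketch (an illustration only; the dictionary encoding of an ARS is an assumption) checks UN→ by collecting, for each element, the normal forms it can reach:

```python
# Checking UN-> on a small finite ARS, given as a map from elements to their one-step reducts.

def normal_forms_from(a, step):
    """All normal forms reachable from a (exploring ->* exhaustively)."""
    seen, frontier, nfs = {a}, [a], set()
    while frontier:
        x = frontier.pop()
        succ = step.get(x, set())
        if not succ:
            nfs.add(x)
        for y in succ:
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return nfs

def has_UN_arrow(step):
    """UN->: no element reduces to two different normal forms."""
    return all(len(normal_forms_from(a, step)) <= 1 for a in step)

ars = {"a": {"b"}, "b": set(), "c": set()}     # Example 1: a -> b, with c isolated
print(has_UN_arrow(ars))                       # True
```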

2.3.2 Syntactic Properties

There are some well known properties of term rewriting systems that depend on the form of the rules. These are called syntactic properties, in contrast with reduction properties, which depend on the reduction relation. A term is called linear if every variable occurs at most once in it. A rule is called left/right linear if its left/right hand side is linear. A term rewriting system is called left/right linear if all its rules are left/right linear. A rule is a duplicating rule if there is a variable with more occurrences in the right hand side than in the left hand side. A collapse rule is a rule with a variable as right hand side. A TRS is called duplicate/collapse free if it contains no duplicating/collapse rules.
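These syntactic properties are directly decidable from the rules. A possible encoding (illustrative only, with rules as pairs of the tuple terms used earlier):

```python
# Syntactic properties of rewrite rules.

from collections import Counter

def is_var(t):
    return isinstance(t, str)

def var_occurrences(t):
    if is_var(t):
        return Counter([t])
    return sum((var_occurrences(a) for a in t[1:]), Counter())

def is_linear(t):
    return all(n <= 1 for n in var_occurrences(t).values())

def is_left_linear(rule):
    return is_linear(rule[0])

def is_collapse_rule(rule):
    return is_var(rule[1])

def is_duplicating_rule(rule):
    left, right = map(var_occurrences, rule)
    return any(right[x] > left[x] for x in right)

rule = (("f", ("g", "z"), "y"), "z")           # f(g(z), y) -> z
print(is_left_linear(rule), is_collapse_rule(rule), is_duplicating_rule(rule))
# True True False
```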

2.4 The Direct Sum and Modularity

It is good programming practice to divide large programs into smaller modules. This insight has led to the study of the union of TRSs. However, it turns out that the union of two TRSs doesn't inherit important properties of the original TRSs. See [Mid90, p. 29] for examples. The reason is that the TRSs can share function symbols. Therefore only the union of disjoint TRSs is considered. Let T1 = (F1, R1, T) and T2 = (F2, R2, T) be two unsorted TRSs with F1 = (F1, V1, arity1) and F2 = (F2, V2, arity2). Then T1 and T2 are disjoint if and only if the underlying sets of function symbols are disjoint: F1 ∩ F2 = ∅, so they share no function symbols. Without loss of generality we also require that V1 and V2 are disjoint. We can combine arity1 and arity2 into a new function arity:

  arity : F1 ∪ F2 → ℕ
  arity(f) = arity1(f)  if f ∈ F1
  arity(f) = arity2(f)  if f ∈ F2

This construction is simple, because F1 and F2 are disjoint. Therefore we will drop the subscripts of functions like arity, sort and st in the sequel. Now F_⊕ = F1 ⊕ F2 = (F1 ⊎ F2, V1 ⊎ V2, arity), where ⊎ denotes the union of disjoint sets. The disjoint union of the TRSs, T1 ⊕ T2, is defined as the new TRS (F_⊕, R_⊕, T(F_⊕)), where R_⊕ = R1 ⊎ R2. In other words: T_⊕ = (F_⊕, R_⊕, T_⊕). This is known in the literature [Toy87b, Rus87] as the direct sum.

This definition of the direct sum of TRSs can also be found in [Mid90, p. 31] and originates from [Toy87b, p. 131]. We can now ask about the influence of the disjoint union operator on the properties of the two term rewriting systems. Does taking the direct sum preserve important properties, such as confluence, strong normalization, etc.? Properties that are resistant against the disjoint union operator are called "modular": the property holds for the direct sum of two TRSs if and only if it holds for the original ones. So we define for a property P on TRSs: P is modular iff for all term rewriting systems R1 and R2:

  P(R1 ⊕ R2) ⟺ (P(R1) ∧ P(R2))

This definition is equivalent to the definition in [Mid90, p. 29].
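A sketch of the direct sum construction itself (illustrative; `function_symbols` and `direct_sum` are hypothetical helper names): the union is only formed when the signatures really are disjoint.

```python
# Direct sum of two TRSs over disjoint signatures.

def function_symbols(t, acc=None):
    acc = set() if acc is None else acc
    if not isinstance(t, str):            # variables are strings
        acc.add(t[0])
        for a in t[1:]:
            function_symbols(a, acc)
    return acc

def direct_sum(rules1, rules2):
    syms1 = set().union(*(function_symbols(l) | function_symbols(r) for l, r in rules1))
    syms2 = set().union(*(function_symbols(l) | function_symbols(r) for l, r in rules2))
    if syms1 & syms2:
        raise ValueError("not disjoint: shared symbols %s" % (syms1 & syms2))
    return rules1 + rules2

r1 = [(("F", ("0",), ("1",), "x"), ("F", "x", "x", "x"))]          # Toyama's R1
r2 = [(("G", "x", "y"), "x"), (("G", "x", "y"), "y")]              # Toyama's R2
print(len(direct_sum(r1, r2)))                                     # 3
```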

2.5 Previous Results

The modularity of many properties is known, due to research by Toyama, Rusinowitch and Middeldorp. For a detailed overview of these results see [Mid90]. In 1987 Toyama presented his famous counterexample, showing that strong normalization is not a modular property of term rewriting systems [Toy87a, p. 141]. It reads as follows: the following TRSs are both terminating.

  R1 = { F(0, 1, x) → F(x, x, x) }
  R2 = { G(x, y) → x,  G(x, y) → y }

But R1 ⊕ R2 contains an infinite reduction:

  F(G(0, 1), G(0, 1), G(0, 1))
    → F(0, G(0, 1), G(0, 1))
    → F(0, 1, G(0, 1))
    → F(G(0, 1), G(0, 1), G(0, 1))
    → ...

Yet there are conditions under which modularity of termination holds. Rusinowitch showed in [Rus87] that if term rewriting systems R1 and R2 both lack collapsing rules, or both lack duplicating rules, then the direct sum R1 ⊕ R2 is terminating if and only if R1 and R2 are terminating. Middeldorp showed that the following condition is sufficient for modularity of termination: one of the term rewriting systems contains neither collapsing rules nor duplicating rules [Mid89]. Furthermore, Toyama proved the modularity of confluence (CR) in [Toy87b]. This proof is presented in a more accessible way in [KMTdV91]. Middeldorp proved in [Mid90] the modularity of weak confluence (WCR) and unique normal forms (UN). Furthermore he describes a proof of the modularity of weak normalization (WN), due to Kurihara and Kaji [KK88]. Finally, there are counterexamples to the modularity of the normal form property (NF) and of unique normal forms with respect to reduction (UN→) [Mid90, p. 97, p. 101]. However, NF is modular for left-linear TRSs [Mid90, p. 100].


3 Many-sorted Term Rewriting

3.1 Many-sorted Signatures and Terms

We can put a sort restriction on terms as follows: every function symbol gets a sort (type) and a sequence of sorts, defining which sort it expects for each of its arguments. Variables get a sort too. Let S be a set of sorts and let S^n be the set of n-tuples over S. By S* we denote the set of finite sequences over S. As in section 2.2 we define a set of function symbols F with a function arity : F → ℕ and a set of variables V. Now we define functions

  st : F ∪ V → S
  ar : F → S*,

with the restriction that ar(f) ∈ S^n for n = arity(f). To get the i-th element of ar(f) we use an extra argument rather than a subscript, to achieve better readability. So we write

  ar(f) = ⟨ar(f, 1), ar(f, 2), ..., ar(f, n)⟩,

where n = arity(f) and ar(f, i) ∈ S. An S-sorted signature F is a tuple (F, V, S, arity, st, ar), where the domains and ranges of st and ar are as defined before. Again, by convention, we will write f ∈ F when f is a function symbol of the signature F.

Example 2 Let S = {0, 1, 2}, V = {x, y, z} and F = {a, b, c, f, g, h}. Instead of writing arity(f) = 2, ar(f) = ⟨0, 1⟩ and st(f) = 2 it is common to write f : 0 × 1 → 2. Using this notation, we can define the S-sorted signature F as:

  x : 0    a : 0    f : 0 × 1 → 2
  y : 1    b : 1    g : 2 → 0
  z : 2    c : 2    h : 1 → 0
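The signature of Example 2 can be encoded, for illustration, with two plain dictionaries (this encoding is an assumption of the sketch, not the thesis notation): st gives the sort of each symbol or variable, ar the argument sorts of each function symbol, and arity(f) is simply len(ar[f]).

```python
# Example 2 as data.

S  = {0, 1, 2}
st = {"x": 0, "y": 1, "z": 2,            # variables
      "a": 0, "b": 1, "c": 2,            # constants
      "f": 2, "g": 0, "h": 0}            # f : 0 x 1 -> 2, g : 2 -> 0, h : 1 -> 0
ar = {"a": [], "b": [], "c": [],
      "f": [0, 1], "g": [2], "h": [1]}

def arity(fname):
    return len(ar[fname])

print(arity("f"), ar["f"], st["f"])      # 2 [0, 1] 2
```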

The sort of a term t is defined as the sort of its outermost symbol, so we have

  sort(t) = st(root(t)).

We want to define a special subset of all terms, consisting of the terms that are well-sorted. The set of well-sorted arguments of a term f(t1, ..., tn) ∈ T(F) is a set of indices:

  wsa(f(t1, ..., tn)) = {i ∈ {1, ..., n} | sort(ti) = ar(f, i)}

Let F be a signature with variables V. The set of well-sorted terms WT(F) is defined inductively by

  V ⊆ WT(F);
  f(t1, ..., tn) ∈ WT(F)  iff  (1) f ∈ F, (2) arity(f) = n, (3) t1, ..., tn ∈ WT(F) and (4) wsa(f(t1, ..., tn)) = {1, ..., n}.

If the signature is clear from the context, then T(F) and WT(F) are abbreviated to T and WT. For an S-sorted signature the definition of T still makes sense; it simply does not use S, st and ar. Up to restriction (4) the definitions of WT and T coincide, so

  WT ⊆ T.

This justifies the intuition that T is the set of all terms over F, while WT is the set of well-sorted terms over F. From the viewpoint of many-sorted signatures, T contains ill-sorted terms. For a detailed discussion of order-sorted algebra we refer to [SS87]. Note also that if there is only one sort, then by definition wsa(f(t1, ..., tn)) = {1, ..., n}. Therefore:

Proposition 3.1 If #S = 1 then WT = T.

So an unsorted signature can be seen as an S-sorted signature with only one sort. This is called one-sorted. An S-sorted signature with #S > 1 is called many-sorted. The set WT is partitioned by the function sort: for each α ∈ S we define

  WT_α = {t ∈ WT | sort(t) = α}.
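A sketch of the well-sortedness test, following the definitions of sort, wsa and WT(F) above, using the Example 2 signature and the earlier tuple encoding of terms (both assumptions of the illustration):

```python
# Well-sortedness over the signature of Example 2.

st = {"x": 0, "y": 1, "z": 2, "a": 0, "b": 1, "c": 2, "f": 2, "g": 0, "h": 0}
ar = {"a": [], "b": [], "c": [], "f": [0, 1], "g": [2], "h": [1]}
VARS = {"x", "y", "z"}

def sort(t):
    """sort(t) = st(root(t)); t is a variable name or a tuple (f, t1, ..., tn)."""
    return st[t] if t in VARS else st[t[0]]

def wsa(t):
    """Indices of the well-sorted arguments of f(t1,...,tn)."""
    f, args = t[0], t[1:]
    return {i + 1 for i, ti in enumerate(args) if sort(ti) == ar[f][i]}

def is_well_sorted(t):
    if t in VARS:
        return True
    f, args = t[0], t[1:]
    return (len(args) == len(ar[f])
            and all(is_well_sorted(ti) for ti in args)
            and wsa(t) == set(range(1, len(args) + 1)))

print(is_well_sorted(("f", "x", "y")))     # True:  f : 0 x 1 -> 2, x : 0, y : 1
print(is_well_sorted(("f", "y", "x")))     # False: the arguments have the wrong sorts
```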

3.2 Well-sorted Substitutions

Proposition 3.2 Let l ∈ T \ V. Then sort(l) = sort(l^σ).

Proof: From proposition 2.1 we have that root(l) = root(l^σ). But then their sorts are equal too, by the definition of sort. □

The notion of well-sortedness is defined for substitutions in such a way that a well-sorted substitution applied to a well-sorted term returns a well-sorted term. A substitution σ is well-sorted if and only if for all x ∈ dom(σ):

  x^σ ∈ WT_{st(x)}.

So x^σ is well-sorted and has the same sort as x. Furthermore we define: Var(t) = the set of variables occurring in t.

We often want to restrict the influence of a substitution to the variables of a particular term. This can be done by function restriction (↾). So we get:

  (σ ↾ Var(l))(x) = x^σ  if x ∈ Var(l)
  (σ ↾ Var(l))(x) = x    otherwise
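A small sketch of these two notions (well-sortedness of a substitution and restriction), again over the assumed encoding of Example 2:

```python
# Well-sorted substitutions and restriction, for the signature of Example 2.

st = {"x": 0, "y": 1, "z": 2, "a": 0, "b": 1, "c": 2, "f": 2, "g": 0, "h": 0}
VARS = {"x", "y", "z"}

def sort(t):
    return st[t] if t in VARS else st[t[0]]

def is_well_sorted_subst(sigma):
    # assumes the terms in the range of sigma are themselves well-sorted
    return all(sort(sigma[x]) == st[x] for x in sigma)

def restrict(sigma, variables):
    """sigma restricted to a set of variables: identity outside that set."""
    return {x: t for x, t in sigma.items() if x in variables}

sigma = {"x": ("g", "c"), "y": ("a",)}      # x -> g(c) (sort 0), y -> a (sort 0)
print(is_well_sorted_subst(sigma))          # False: y is mapped to a term of sort 0, not 1
print(is_well_sorted_subst(restrict(sigma, {"x"})))   # True
```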

Proposition 3.3 Let l ∈ WT_s for some sort s ∈ S and let σ be a substitution. Then l^σ ∈ WT_s if and only if σ ↾ Var(l) is well-sorted.

Proof: We have that sort(l) = s and that l is well-sorted.

(⇐) Let σ be well-sorted on Var(l). We prove that l^σ is well-sorted and sort(l^σ) = s by induction on the depth of l.

  If l = x ∈ V, then l^σ = x^σ ∈ WT_s, because σ is well-sorted.
  If l = c ∈ F, then l^σ = l and we are done.
  If l = f(t1, ..., tn) then l^σ = f(t1^σ, ..., tn^σ) and sort(l^σ) = st(f) = sort(l) = s. For i ∈ {1, ..., n}: Var(ti) ⊆ Var(l), so σ is well-sorted on Var(ti). From the induction hypothesis we have that ti^σ ∈ WT and sort(ti^σ) = sort(ti) = ar(f, i). We conclude that l^σ ∈ WT_s.

(⇒) Let l^σ be well-sorted. For any x ∈ Var(l) we prove that x^σ is well-sorted and that st(x) = sort(x^σ). If l = x ∈ V, then Var(l) = {x} and x^σ = l^σ ∈ WT_s, so σ ↾ Var(l) is well-sorted. Otherwise, if l ∉ V, each occurrence of x in l must be the argument of some function symbol, say the i-th argument of f. Because l ∈ WT, ar(f, i) = st(x). Because l^σ ∈ WT too, sort(x^σ) = ar(f, i) = st(x) and x^σ ∈ WT. So σ is well-sorted for the variables from Var(l). □

Proposition 3.4 Let l, r, l^σ ∈ WT, l ∉ V and Var(r) ⊆ Var(l). Then r^σ ∈ WT and sort(r^σ) = sort(r).

Proof: By proposition 3.2, sort(l) = sort(l^σ). Applying proposition 3.3 we get that σ ↾ Var(l) is well-sorted. Because Var(r) ⊆ Var(l) we have that σ ↾ Var(r) is also well-sorted. Again by proposition 3.3 we conclude that r^σ ∈ WT and sort(r^σ) = sort(r). □

3.3 Well-sorted Rewriting

Let an unsorted signature Σ (with variables V) and a set of rules R with left and right hand sides in T(Σ) be given. In section 2.2 we have seen the following two restrictions on these rules:

  - l ∉ V
  - Var(r) ⊆ Var(l)

If we extend Σ with a set of sorts S and functions st and ar to an S-sorted signature F, we require that the rules are sorted with respect to the new S-sorted signature. Therefore we introduce: a signature F is compatible with a set of rules if the following sort restrictions hold for the rules l → r:

  - l, r ∈ WT(F)
  - sort(l) = sort(r)

The TRS does not really change, because we still admit ill-sorted terms to be rewritten. If we introduce only one sort then the sort restrictions are satisfied implicitly (see also proposition 3.1). So extending the unsorted signature of some TRS to a one-sorted signature always gives a result which is compatible with that TRS.

Example 3 Let R consist of the following rules:

  f(g(z), y) → z        (collapse rule)
  f(x, b) → c
  f(h(y), y) → c        (non-left-linear rule)
  g(c) → h(b)

Then (F, R) is a TRS with a compatible signature, if we take F from Example 2.
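The compatibility check is easily mechanized. The following sketch (illustrative only; it reuses the assumed encoding of Example 2) verifies that the rules of Example 3 are well-sorted and sort-preserving:

```python
# Compatibility of a rule set with a sorted signature (Example 3 against Example 2).

st = {"x": 0, "y": 1, "z": 2, "a": 0, "b": 1, "c": 2, "f": 2, "g": 0, "h": 0}
ar = {"a": [], "b": [], "c": [], "f": [0, 1], "g": [2], "h": [1]}
VARS = {"x", "y", "z"}

def sort(t):
    return st[t] if t in VARS else st[t[0]]

def is_well_sorted(t):
    if t in VARS:
        return True
    return (len(t) - 1 == len(ar[t[0]])
            and all(is_well_sorted(ti) and sort(ti) == ar[t[0]][i]
                    for i, ti in enumerate(t[1:])))

def compatible(rules):
    return all(is_well_sorted(l) and is_well_sorted(r) and sort(l) == sort(r)
               for l, r in rules)

R = [(("f", ("g", "z"), "y"), "z"),        # f(g(z), y) -> z
     (("f", "x", ("b",)), ("c",)),         # f(x, b)    -> c
     (("f", ("h", "y"), "y"), ("c",)),     # f(h(y), y) -> c
     (("g", ("c",)), ("h", ("b",)))]       # g(c)       -> h(b)
print(compatible(R))                       # True
```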

Proposition 3.5 Let

  - (F, R) be a TRS with a compatible S-sorted signature,
  - s, t ∈ T(F),
  - s →_R t,
  - sort(s) ≠ sort(t).

Then s = l^σ, t = r^σ and r ∈ V for some l → r ∈ R and substitution σ.

So the reduction was in fact an elementary reduction with a collapse rule.

Proof: There are two cases for a reduction →_R. If s = f(t1, .., tj, .., tn) →_R f(t1, .., tj', .., tn) = t with tj →_R tj', then sort(s) = st(f) = sort(t), but we know that this is not the case. So s = l^σ →_R r^σ = t for some σ : V → T(F) and l → r ∈ R, satisfying the sort restriction sort(l) = sort(r). Furthermore, l ∉ V, so sort(l^σ) = sort(l) by proposition 3.2. Hence

  - sort(s) = sort(l^σ) = sort(l) = sort(r), and
  - sort(s) ≠ sort(t) = sort(r^σ).

We get sort(r) ≠ sort(r^σ). Applying proposition 3.2 we conclude r ∈ V. □

Proposition 3.6 Let (F, R) be a TRS with a compatible S-sorted signature F, α ∈ S, s ∈ WT_α and s →_R t. Then t ∈ WT_α too.

Proof: (induction on the structure of the proof that s →_R t)

If s = l^σ and t = r^σ for some σ : V → T and l → r ∈ R, then l, r, l^σ ∈ WT (see proposition 3.2) and Var(r) ⊆ Var(l). So we can apply proposition 3.4, giving r^σ ∈ WT and sort(r^σ) = sort(r) = sort(l) = sort(l^σ) = α.

If s = f(t1, .., tj, .., tn) and t = f(t1, .., tj', .., tn) with tj →_R tj', then we know:

  1. f ∈ F, because s ∈ WT.
  2. arity(f) = n, because s ∈ WT.
  3. For all i with 1 ≤ i ≤ n: ti ∈ WT. Therefore tj' ∈ WT by the induction hypothesis.
  4. For all i with 1 ≤ i ≤ n: ar(f, i) = sort(ti), because s ∈ WT. Furthermore sort(tj') = sort(tj) (by the induction hypothesis) = ar(f, j).

From these four statements we know that t ∈ WT. Furthermore sort(t) = st(f) = sort(s) = α. □

This proposition says that the set WT(F) is closed under rewriting in the TRS (F, R) if F is a compatible S-sorted signature. It also shows that the set WT_α is closed under rewriting for any α ∈ S. If some subset of the terms is closed under rewriting, then it is possible to restrict the rewrite relation to this subset. We introduce a new definition for restricted TRSs. Let an S-sorted signature F (with variables V), a set of rules R and a subset A of the terms be given. Then (F, R, A) is a restricted TRS if:

  - l ∉ V
  - Var(r) ⊆ Var(l)
  - l, r ∈ WT(F)
  - sort(l) = sort(r)
  - A is closed under the unrestricted rewrite relation of (F, R).

An unrestricted TRS (F, R) can now be written as (F, R, T(F)), abbreviated to (F, R, T). We define a many-sorted term rewriting system (MTRS) M to be a TRS with a compatible S-sorted signature, in which we only reduce well-sorted terms. This can be written as M = (F, R, WT). We can also restrict the terms to the well-sorted terms of some sort α. Then we get the α-sorted term rewriting system (F, R, WT_α). Note that rules of a sort different from α can also play a role in the latter system. This definition of MTRSs and well-sorted rewriting is equivalent to the definition of a many-sorted term rewriting system in [Zan91, p. 618], but the notations are quite different.

3.4 The Direct Sum for Many-sorted TRSs

We can generalize the disjoint union operator to S-sorted term rewriting systems. Now T1 and T2 are S1- and S2-sorted TRSs, with signatures F1 = (F1, V1, S1, arity, st, ar) and F2 = (F2, V2, S2, arity, st, ar). Again F1 and F2 have to be disjoint, and V1 and V2 are assumed to be disjoint without loss of generality. However, we require no disjointness restriction on S1 and S2. The functions st and ar on function symbols and variables are extended to the new domain in the same way as we extended arity in section 2.4. In this way we get the new signature F_⊕ = (F1 ⊎ F2, V1 ⊎ V2, S1 ∪ S2, arity, st, ar) and the direct sum of two many-sorted TRSs T1 ⊕ T2 = (F_⊕, R_⊕, T_⊕). (Here we are still speaking about unsorted rewriting.) The set of terms in the new TRS is T(F_⊕) and the set of well-sorted terms is WT(F_⊕). The following lemmas say that the (well-sorted) terms of F1 and F2 are (well-sorted) terms in F_⊕:

Lemma 3.7 WT(Fj) ⊆ WT(F_⊕) for j = 1, 2.

Proof: Let j = 1 (without loss of generality). By induction on the depth of t we show that if t ∈ WT(F1) then t ∈ WT(F_⊕) too. □

Lemma 3.8 T(Fj) ⊆ T(F_⊕) for j = 1, 2.

The proof of this lemma is analogous to the proof of the previous lemma.

Let S1 and S2 be the sets of sorts of F1 and F2, respectively. If S1 ∩ S2 = ∅ then WT(F_⊕) = WT(F1) ∪ WT(F2), because we cannot make mixed terms without violating the sort restrictions.

Proposition 3.9 If T1 is an S1-sorted TRS and T2 is an S2-sorted TRS then T_⊕ is an S1 ∪ S2-sorted TRS.

Proof: Every rule l → r from R_⊕ is in some Ri, with i ∈ {1, 2}. Because Ti is Si-sorted, we know that l, r ∈ WT(Fi) and sort(l) = sort(r). Lemma 3.7 gives us that l, r ∈ WT(F_⊕). So the sort conditions are satisfied. Moreover, the signature F1 ⊕ F2 is S1 ∪ S2-sorted. So T_⊕ is an S1 ∪ S2-sorted TRS. □

The last proposition justifies speaking about the direct sum of MTRSs. So we have:

  (F1, R1, WT) ⊕ (F2, R2, WT) = (F_⊕, R_⊕, WT)

In section 2.4 we saw the definition of modularity. A property is modular if it is resistant against taking the disjoint union. The same definition can be used in the many-sorted case. It is not known whether properties that are modular in the unsorted case are modular in the many-sorted case too. The reverse is trivial, because an unsorted TRS can be seen as a many-sorted TRS with only one sort. Therefore a new notion is introduced: MS-modularity. P is MS-modular iff for all MTRSs M1 and M2:

  P(M1 ⊕ M2) ⟺ (P(M1) ∧ P(M2))

3.5 Sort Elimination and Persistence

We are interested in what happens with some property P of an MTRS if that MTRS is regarded as a TRS. This can be seen as admitting sort clashes. The sort elimination operator Θ transforms an MTRS into the corresponding TRS. It is defined as a function:

  Θ((F, R, WT)) = (F, R, T).

If eliminating the sort restrictions has no influence on the property P, then P is called persistent. More precisely, a property P on TRSs is called persistent if for all many-sorted TRSs M:

  P(M) ⟺ P(Θ(M))

This definition of persistence is equivalent to the definition of [Zan91, p. 619], in which an MTRS is transformed into a copied version with a new one-sorted signature. In [Zan91] a proof is given that persistent properties are modular. The same proof technique can be used for MS-modularity. This gives the following theorem:

Theorem 3.10 Persistent and component closed properties are MS-modular.

Proof: Let P be a component closed and persistent property. Let M1 and M2 be two disjoint many-sorted term rewriting systems, with signatures (F1, V1, S1, arity1, st1, ar1) and (F2, V2, S2, arity2, st2, ar2) respectively. We have to prove the MS-modularity of P, using the persistence of P. Therefore we construct a new MTRS M which combines M1 and M2; the distinction between the two MTRSs is kept in the sorts. The new signature is F = (F, V, S, arity, st, ar) with

  F = F1 ⊎ F2
  V = V1 ⊎ V2
  S = {(s, 1) | s ∈ S1} ⊎ {(s, 2) | s ∈ S2}
  arity(f) = arity1(f) if f ∈ F1, arity2(f) if f ∈ F2
  st(f) = (st1(f), 1) if f ∈ F1 ∪ V1, (st2(f), 2) if f ∈ F2 ∪ V2
  ar(f, i) = (ar1(f, i), 1) if f ∈ F1, (ar2(f, i), 2) if f ∈ F2

The new set of rules is R = R1 ⊎ R2. This gives a new MTRS M = (F, R, WT). The terms of M with a sort of the form (s, 1) correspond one-to-one with terms of M1, and the rewrite relations on these terms correspond one-to-one as well. The same holds with 1 replaced by 2. Because terms with a sort (s, 1) never reduce to terms with a sort (s, 2) and vice versa (proposition 3.6), they are in different equivalence classes. Because P is component closed, we can conclude that

  (P(M1) ∧ P(M2)) ⟺ P(M).

From the persistence of P we conclude that

  P(M1 ⊕ M2) ⟺ P(Θ(M1 ⊕ M2)).

But, because the function symbols of Θ(M1 ⊕ M2) and Θ(M) are equal, the rules are equal, and both have no sort restrictions, we can conclude that

  P(Θ(M1 ⊕ M2)) ⟺ P(Θ(M)).

Combining these three results, we get that (P(M1) ∧ P(M2)) ⟺ P(M1 ⊕ M2), and hence P is MS-modular. □

3.6 Definition of the Problem

The problem of this thesis is to establish the relationship between the following three features:

  1. persistence
  2. modularity for TRSs
  3. modularity for MTRSs.

In fact [Zan91] shows that (1) implies (2). With the same method we proved in theorem 3.10 that (1) implies (3). Furthermore it is evident that (3) implies (2). The main theorem of this thesis is that (3) implies (1) (Theorem 4.23). The most interesting question is whether (2) implies (1), because in that case we would know that persistence holds for the same properties that have been proven modular. This problem is reduced to the remaining question whether (2) implies (3). One strategy to achieve this result would be to "type-check" the existing modularity proofs. Another approach would be a general proof that modular properties are MS-modular. This seems rather difficult, because it is difficult (or perhaps impossible) to construct for an arbitrary MTRS a corresponding TRS with an isomorphic reduction relation. But perhaps this question leads to a closer analysis of the exact restrictions of many-sorted rewriting, which would be a fruitful topic for further research.

4 Simulation of Sort Elimination by the Direct Sum

In this chapter we prove that component closed properties P that are modular for MTRSs with a finite set of sorts are persistent. The structure of the proof is the same as in theorem 3.10. There we took two MTRSs M1 and M2, constructed a similar MTRS M, proved that the disjoint union of M1 and M2 corresponds to sort elimination in M, and used persistence to establish MS-modularity. Now we take the reverse order, so the structure of the proof reads:

  1. Given an MTRS M, construct a family of other MTRSs, indexed by the set of sorts: {M_s | s ∈ S}.
  2. Prove that the reduction relation of M is equivalent to the reduction relation of M_s for any s ∈ S. (section 4.1)
  3. Prove that the reduction relation of Θ(M) is equivalent to the reduction relation of ⊕_{s∈S} M_s. (section 4.2)
  4. Use the modularity of P for MTRSs to observe that P(⊕_{s∈S} M_s) ⟺ ∀s ∈ S : P(M_s). Here the finiteness of the set S is used.
  5. Conclude that P(M) ⟺ P(Θ(M)) and hence P is persistent. (section 4.3)

4.1 Modified Copies of an MTRS

Let M = (F, R, WT), with signature F = (F, V, S, arity, st, ar), where S is a set of sorts. These objects are used throughout this section. Because we don't specify anything about them, they can be seen as universally quantified. In this section a construction is given for a family of MTRSs M_s (for any s ∈ S), starting from the general MTRS M. The idea is that each M_s is a modified copy of M, in which the function symbols are the same, but the sorts are different: the sorts are permuted in a consistent way. To define this construction, we use a family of functions τ_s, for any s ∈ S. These functions depend on binary operators + and − on sorts. We assume that a binary operator + on S can be found in such a way that (S, +) forms an abelian group:

  - There is a 0 ∈ S such that s + 0 = s.
  - Every element s ∈ S has an inverse −s such that s + (−s) = 0.
  - The + operator is associative.
  - The + operator is commutative.

We write s − s' instead of s + (−s'). If S is a finite set, we can identify it with {0, 1, ..., n − 1}; then the operators + (mod n) and − (mod n) satisfy the conditions. If S is countably infinite, we can identify it with ℤ, and the usual 0, + and − satisfy the conditions. We define for all s ∈ S and all f ∈ F a new function symbol f_s, and for any x ∈ V a new variable x_s. For any s ∈ S, the set of new function symbols {f_s | f ∈ F} is denoted by F_s. In the same way, the set of variables {x_s | x ∈ V} is denoted by V_s. We also define the functions st, ar and arity on these sets. Let f_s ∈ F_s and x_s ∈ V_s, then

  arity(f_s) = arity(f)
  st(f_s) = st(f) + s
  st(x_s) = st(x) + s
  ar(f_s, i) = ar(f, i) + s    for 1 ≤ i ≤ arity(f).

In this way we have obtained a new S-sorted signature, denoted by F_s:

  F_s = (F_s, V_s, S, arity, st, ar).

Example 4 If we apply this construction to the signature of Example 2, we get three new signatures (because there are three sorts):

  F_0:  x0 : 0   a0 : 0   f0 : 0 × 1 → 2
        y0 : 1   b0 : 1   g0 : 2 → 0
        z0 : 2   c0 : 2   h0 : 1 → 0

  F_1:  x1 : 1   a1 : 1   f1 : 1 × 2 → 0
        y1 : 2   b1 : 2   g1 : 0 → 1
        z1 : 0   c1 : 0   h1 : 2 → 1

  F_2:  x2 : 2   a2 : 2   f2 : 2 × 0 → 1
        y2 : 0   b2 : 0   g2 : 1 → 2
        z2 : 1   c2 : 1   h2 : 0 → 2

We define a mapping τ_s : WT(F) → WT(F_s) for any s ∈ S:

  τ_s(x) = x_s
  τ_s(f(t1, ..., tn)) = f_s(τ_s(t1), ..., τ_s(tn))
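The construction of the copies and of τ_s can be illustrated as follows (a sketch under the assumed encoding of Example 2, with S = {0, 1, 2} and + taken mod 3; the helper names are hypothetical):

```python
# Modified copies: labelled symbols with sorts shifted by s.

st = {"x": 0, "y": 1, "z": 2, "a": 0, "b": 1, "c": 2, "f": 2, "g": 0, "h": 0}
ar = {"a": [], "b": [], "c": [], "f": [0, 1], "g": [2], "h": [1]}
VARS = {"x", "y", "z"}
N = 3                                          # |S|, so s + s' is computed mod 3

def labelled_signature(s):
    """st and ar of the copy F_s: same symbols, labelled, sorts shifted by s."""
    st_s = {sym + str(s): (srt + s) % N for sym, srt in st.items()}
    ar_s = {sym + str(s): [(a + s) % N for a in args] for sym, args in ar.items()}
    return st_s, ar_s

def tau(s, t):
    """tau_s : WT(F) -> WT(F_s); every symbol of t gets the label s."""
    if t in VARS:
        return t + str(s)
    return (t[0] + str(s),) + tuple(tau(s, ti) for ti in t[1:])

print(labelled_signature(1)[0]["f1"])          # 0, i.e. f1 has sort (2 + 1) mod 3, as in Example 4
print(tau(1, ("f", "x", "y")))                 # ('f1', 'x1', 'y1')
```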

Now we can prove the following lemma:

Lemma 4.1 If t ∈ WT(F) then τ_s(t) ∈ WT(F_s) and sort(τ_s(t)) = sort(t) + s.

Proof: Induction on the depth of t. □

Lemma 4.1 tells us something about the functions τ_s. The intuition behind them is that they assign a label s to all function symbols of a term. Because the sorts are changed in a consistent way, the result of labelling a well-sorted term is again well-sorted. Therefore the result is in WT(F_s). This is proved in the rest of this section.

We hope that the functions τ_s are bijections. Let us define a function τ̃_s : WT(F_s) → WT(F) and prove that it is the inverse of τ_s:

  τ̃_s(x_s) = x
  τ̃_s(f_s(t1, ..., tn)) = f(τ̃_s(t1), ..., τ̃_s(tn))

Now we have:

Lemma 4.2 The functions τ̃_s are the inverses of the τ_s. Hence the τ_s are bijections from WT(F) to WT(F_s).

Proof:

  Claim 1 For all t ∈ WT(F): τ̃_s(τ_s(t)) = t.
  Claim 2 For all t ∈ WT(F_s): τ_s(τ̃_s(t)) = t.

  Proof: Induction on depth(t). □

We conclude from claims 1 and 2 that τ̃_s = τ_s^{-1} and therefore τ_s is bijective. □

The functions τ_s are extended to substitutions in a natural way. For a substitution σ : V → WT(F) and for all x ∈ V we define

  τ_s(σ)(x_s) = τ_s(x^σ).

This generalizes to a substitution τ_s(σ) on terms, as in section 2.1.

Lemma 4.3 The functions τ_s : (V → WT(F)) → (V_s → WT(F_s)) are surjective.

Proof: Let ρ : V_s → WT(F_s) be an arbitrary substitution. We have to show that there exists a substitution σ : V → WT(F) with τ_s(σ) = ρ. From lemma 4.2 we know that τ_s has an inverse. So we can define, for all x ∈ V,

  x^σ = τ_s^{-1}(x_s^ρ).

Then we have:

  τ_s(σ)(x_s) = τ_s(x^σ)                  by the definition of τ_s on substitutions
              = τ_s(τ_s^{-1}(x_s^ρ))      by the definition of σ
              = x_s^ρ                     by lemma 4.2 □

The definition of these substitutions τ_s(σ) is made in such a way that we can prove the following lemma:

Lemma 4.4 For all t ∈ WT(F), substitutions σ and s ∈ S:

  τ_s(t^σ) = (τ_s(t))^{τ_s(σ)}

Proof: Induction on the depth of t. □

This is a useful result, because we now know that if the left-hand side of some rule l → r matches a term t, then this matching can be translated to the labelled version τ_s(t). This is used in lemma 4.5. We can use the functions τ_s on well-sorted terms to define new MTRSs M_s, starting from a general MTRS M. Let M = (F, R, WT), where F is an S-sorted signature, R is a set of rules with function symbols from F and WT(F) is the set of well-sorted terms over F. Then we define:

  R_s = {τ_s(l) → τ_s(r) | l → r ∈ R}   and   M_s = (F_s, R_s, WT).

Note that from the signatures in M and M_s it follows that the set WT in M is in fact WT(F), while the set WT in M_s is WT(F_s). Given an MTRS M we have now defined for each s ∈ S a new MTRS M_s. This corresponds to step 1 in the overview of the proof at the beginning of this chapter. Now we have to prove that all these MTRSs are isomorphic. This is true because only the names of the function symbols are changed, and their sorts are permuted in a consistent way. This is proved by the following lemmas:

Lemma 4.5 For t, t' ∈ WT(F): if t →_R t' then τ_s(t) →_{R_s} τ_s(t').

Proof: Induction on the structure of the proof that t →_R t'. The induction step is trivial, so let us do the case of an elementary reduction. Let t = l^σ and t' = r^σ for some rule l → r ∈ R. Then

  τ_s(l^σ) = (τ_s(l))^{τ_s(σ)}           by lemma 4.4
           →_{R_s} (τ_s(r))^{τ_s(σ)}     because τ_s(l) → τ_s(r) ∈ R_s
           = τ_s(r^σ)                    by lemma 4.4 □

Lemma 4.6 For t, t' ∈ WT(F): if τ_s(t) →_{R_s} τ_s(t') then t →_R t'.

Proof: Let t = f(t1, ..., tn) (variables are normal forms and can't reduce). The proof is by induction on the structure of the proof that τ_s(t) →_{R_s} τ_s(t'), but again we restrict ourselves to the base case, because the induction step is trivial. Let τ_s(t) = l^ρ and τ_s(t') = r^ρ with l → r in R_s. Then we know from the definition of R_s that l = τ_s(a) and r = τ_s(b) for some rule a → b ∈ R. Then we have:

  τ_s(t) = l^ρ = (τ_s(a))^ρ
         = (τ_s(a))^{τ_s(σ)}     for some substitution σ, by lemma 4.3
         = τ_s(a^σ)              by lemma 4.4

Then t = a^σ, because τ_s is injective on terms (see lemma 4.2). In the same way we get t' = b^σ. Because a → b ∈ R, we have t →_R t'. □

The conclusion of this section can be summarized in the following proposition:

Proposition 4.7 There exists a bijection τ_s : WT(F) → WT(F_s) (lemma 4.2) with, for all t, t' ∈ WT(F):

  t →_R t'  ⟺  τ_s(t) →_{R_s} τ_s(t')

(lemmas 4.5 and 4.6).

So the rewrite relation of sorted rewriting in the MTRS M and the rewrite relation of sorted rewriting in any MTRS M_s are "isomorphic". This corresponds to step 2 of the overview of the proof at the beginning of this chapter.

4.2 Sort Elimination is Taking a Direct Sum

In this section we will see that unsorted rewriting in the TRS Θ(M) = (F, R, T) can be simulated by rewriting in the MTRS that is the direct sum of the MTRSs M_s of section 4.1. This corresponds to step 3 of the overview of the proof at the beginning of this chapter. We write F_⊕ for ⊕_{s∈S} F_s; in the same way we define V_⊕, R_⊕ and M_⊕. We will define a function Φ : S × T(F) → WT(F_⊕) taking a sort s and a term and returning a well-sorted term with its root symbol in F_s. The + and − operators are the same as in section 4.1. The function Φ is defined recursively:

  Φ(s, x) = x_s
  Φ(s, f(t1, ..., tn)) = f_s(Φ(s1, t1), ..., Φ(sn, tn))   where si = s + ar(f, i) − sort(ti)

The following intuition lies behind this function: it starts labelling a term at the root with label s, and goes on labelling the sons with label s until a subterm with the wrong sort is reached. Then the label is changed in such a way that the resulting term becomes well-sorted. In this way the function symbols within the well-sorted islands of a term get the same label, while a transition between two well-sorted parts results in a transition of the labels. Let us prove that the range of Φ is indeed WT(F_⊕) (so the result is well-sorted):

Lemma 4.8 For all t ∈ T(F) and s ∈ S: Φ(s, t) ∈ WT(F_⊕) and sort(Φ(s, t)) = sort(t) + s.

Proof: Induction on depth(t). □
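The label adjustment at a sort clash can be made concrete with a small sketch (illustrative only; it reuses the assumed encoding of Example 2, with + and − taken mod 3):

```python
# The labelling function Phi on arbitrary, possibly ill-sorted, terms.

st = {"x": 0, "y": 1, "z": 2, "a": 0, "b": 1, "c": 2, "f": 2, "g": 0, "h": 0}
ar = {"a": [], "b": [], "c": [], "f": [0, 1], "g": [2], "h": [1]}
VARS = {"x", "y", "z"}
N = 3

def sort(t):
    return st[t] if t in VARS else st[t[0]]

def phi(s, t):
    """Phi(s, x) = x_s; Phi(s, f(t1,...,tn)) = f_s(Phi(s1,t1),...,Phi(sn,tn))
    with s_i = s + ar(f, i) - sort(t_i)."""
    if t in VARS:
        return t + str(s)
    f, args = t[0], t[1:]
    labelled = tuple(phi((s + ar[f][i] - sort(ti)) % N, ti)
                     for i, ti in enumerate(args))
    return (f + str(s),) + labelled

# f(z, y) is ill-sorted: z has sort 2, while f expects sort 0 in its first argument.
# With s = 0 the first argument therefore gets the label 0 + 0 - 2 = 1 (mod 3).
print(phi(0, ("f", "z", "y")))      # ('f0', 'z1', 'y0'), which is well-sorted over F_+
```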

We define the function Φ on substitutions in the following way. For σ : V → T(F) and x ∈ V:

  Φ(s, σ)(x_s) = Φ(s + st(x) − sort(x^σ), x^σ)

These substitutions have the same goal as the substitutions τ_s(σ) of section 4.1. Let us write Φ_s(t) = Φ(s, t). Now we can indeed prove the following lemma:

Lemma 4.9 For all l ∈ WT(F) \ V and substitutions σ:

  Φ(s, l^σ) = (Φ(s, l))^{Φ(s,σ)}.

Proof: (induction on depth(l)) If l = f(t1, ..., tn) ∈ WT(F) \ V then, by the definition of Φ,

  Φ(s, l^σ) = f_s(Φ(s1, t1^σ), ..., Φ(sn, tn^σ))   where si = s + ar(f, i) − sort(ti^σ).

Claim 1 Φ(si, ti^σ) = (Φ(s, ti))^{Φ(s,σ)} for all 1 ≤ i ≤ n.

Proof: There are two possibilities: ti ∈ V or not. In the second case we can use the induction hypothesis, but in the first case this is impossible.

If ti = y ∈ V then

  Φ(si, ti^σ) = Φ(s + ar(f, i) − sort(y^σ), y^σ)     by the definition of si
              = Φ(s + st(y) − sort(y^σ), y^σ)         because l ∈ WT(F)
              = (y_s)^{Φ(s,σ)}                         by the definition of Φ(s, σ)
              = (Φ(s, ti))^{Φ(s,σ)}

If ti ∉ V then

  Φ(si, ti^σ) = Φ(s + ar(f, i) − sort(ti^σ), ti^σ)
              = Φ(s + ar(f, i) − sort(ti), ti^σ)       by proposition 3.2
              = Φ(s, ti^σ)                             because l ∈ WT(F)
              = (Φ(s, ti))^{Φ(s,σ)}                    by the induction hypothesis □

We resume the first computation:

  Φ(s, l^σ) = f_s(Φ(s1, t1^σ), ..., Φ(sn, tn^σ))                       see above
            = f_s((Φ(s, t1))^{Φ(s,σ)}, ..., (Φ(s, tn))^{Φ(s,σ)})       by the claim
            = (Φ(s, f(t1, ..., tn)))^{Φ(s,σ)}                          by the definition of Φ □

When we want to reason about Φ as a function of its second argument (with its first argument fixed) we write Φ_s(t) = Φ(s, t). Now we can formulate the following corollary:

Corollary 4.10 Φ_s ↾ WT(F) = τ_s.

Proof: We show that for all t ∈ WT(F): Φ(s, t) = τ_s(t). If t = x then Φ(s, x) = x_s = τ_s(x). If t = f(t1, ..., tn) then it follows from lemma 4.9 with σ = id. □

This corollary justifies the intuition that we had for the function Φ: in a well-sorted term t there are no sort clashes, so Φ(s, t) consists of symbols with only one label.

Example 5 Let F_⊕ be the union of the signatures F_0, F_1 and F_2 from Example 4. Then one can verify with the signature of that example that the right hand side term is indeed well-sorted.

Figure 2: the action of Φ_1.

We hope that, using the definition of Φ(s, ·), we can prove that for every reduction t →_R t' in Θ(M) there exists a reduction Φ(s, t) →_{R_⊕} Φ(s, t') in M_⊕. But this is only true if there is no collapse rule in R. In the case of a collapse rule it is possible that sort(t) ≠ sort(t') (see proposition 3.5), and then the label of the reduct has to be changed. In fact things go wrong because the previous lemma doesn't hold for variables: the label of the root symbol depends on its sort. Therefore we introduce a new function Ψ. The only reason for its introduction is to compensate the influence of the sorts on the label of the root symbol. We define:

  Ψ(s, t) = Φ(s − sort(t), t)

To give an intuition for the effect of the function Ψ:

Lemma 4.11 For all terms t ∈ T(F): Ψ(s, t) ∈ WT(F_⊕) and sort(Ψ(s, t)) = s.

Proof: Ψ(s, t) = Φ(s − sort(t), t). From lemma 4.8 we get that Φ(s − sort(t), t) ∈ WT(F_⊕) and that sort(Ψ(s, t)) = s − sort(t) + sort(t) = s. □

So indeed we see that the sort of Ψ(s, t) does not depend on the sort of t. Defining Ψ_s(t) = Ψ(s, t), we can see the function Ψ_s as a map

  Ψ_s : T(F) → WT_s(F_⊕).

We can prove, for the whole domain of Ψ (all terms, including variables):

Lemma 4.12 For any l ∈ WT(F), σ : V → T(F) and s ∈ S:

  Ψ(s, l^σ) = (Φ(p, l))^{Φ(p,σ)}   with p = s − sort(l).

Proof: Let p = s − sort(l). There are two cases. If l = y ∈ V then we have:

  (Φ(p, l))^{Φ(p,σ)} = (y_{s−st(y)})^{Φ(s−st(y),σ)}            by the definition of Φ on variables
                     = Φ(s − st(y) + st(y) − sort(y^σ), y^σ)    by the definition of Φ(s, σ)
                     = Φ(s − sort(y^σ), y^σ)
                     = Ψ(s, l^σ)

If l ∉ V we use lemma 4.9:

  Ψ(s, l^σ) = Φ(s − sort(l^σ), l^σ)                             by the definition of Ψ
            = (Φ(s − sort(l^σ), l))^{Φ(s−sort(l^σ),σ)}          by lemma 4.9
            = (Φ(p, l))^{Φ(p,σ)}                                by prop. 3.2 and because p = s − sort(l) □

The following lemmas show that Ψ_s has an inverse Ψ_s^{-1}. Let us define Ψ̃ : WT(F_⊕) → T(F) by:

  Ψ̃(x_s) = x
  Ψ̃(f_s(t1, ..., tn)) = f(Ψ̃(t1), ..., Ψ̃(tn))

Lemma 4.13 For all t ∈ T(F): Ψ̃(Ψ_s(t)) = t.

Proof: Induction on depth(t). □

The function Ψ̃ cannot be the inverse of Ψ_s, because it has the wrong domain. But now we restrict the domain and define for any s ∈ S a function Ψ̃_s : WT_s(F_⊕) → T(F) by:

  Ψ̃_s = Ψ̃ ↾ WT_s(F_⊕).

Now we can prove:

Lemma 4.14 The function Ψ̃_s is the inverse of Ψ_s, for all s ∈ S.

Proof:

Claim 1 For all t ∈ T(F): Ψ̃_s(Ψ_s(t)) = t.

Proof: From lemma 4.11 we obtain sort(Ψ_s(t)) = s and Ψ_s(t) ∈ WT(F_⊕). So Ψ_s(t) ∈ dom(Ψ̃_s). From lemma 4.13 we see that Ψ̃_s(Ψ_s(t)) = Ψ̃(Ψ_s(t)) = t. □

Claim 2 For all t ∈ WT_s(F_⊕): Ψ_s(Ψ̃_s(t)) = t.

Proof: (induction on depth(t))

If t ∈ V_⊕ then t = x_p for some x ∈ V and p ∈ S, with sort(x_p) = st(x) + p = s (because t ∈ WT_s(F_⊕)). Therefore p = s − st(x). But then we have t = Ψ_s(x). Now we can prove:

  Ψ_s(Ψ̃_s(t)) = Ψ_s(Ψ̃_s(Ψ_s(x)))    see above
              = Ψ_s(x)               by claim 1
              = t                    see above

If t = f_p(t1, ..., tn) then p = s − st(f), because sort(t) = st(f) + p = s. Furthermore, because t is well-sorted, we have ar(f_p, i) = sort(ti). With these ingredients we make the following derivation:

  Ψ_s(Ψ̃_s(f_p(t1, ..., tn)))
    = Ψ_s(Ψ̃(f_p(t1, ..., tn)))                            because sort(t) = s
    = Ψ_s(f(Ψ̃(t1), ..., Ψ̃(tn)))                           by the definition of Ψ̃
    = Ψ_s(f(Ψ̃_{s1}(t1), ..., Ψ̃_{sn}(tn)))                  where si = sort(ti), by the definition of Ψ̃_s
    = Φ(p, f(Ψ̃_{s1}(t1), ..., Ψ̃_{sn}(tn)))                 by the definition of Ψ_s
    = f_p(Φ(p1, Ψ̃_{s1}(t1)), ..., Φ(pn, Ψ̃_{sn}(tn)))        where pi = p + ar(f, i) − sort(Ψ̃_{si}(ti)), by the definition of Φ
    = f_p(t1, ..., tn),

because

  Φ(pi, Ψ̃_{si}(ti)) = Ψ(pi + sort(Ψ̃_{si}(ti)), Ψ̃_{si}(ti))
                     = Ψ(si, Ψ̃_{si}(ti))     because pi + sort(Ψ̃_{si}(ti)) = p + ar(f, i) = ar(f_p, i) = sort(ti) = si
                     = ti                     by the induction hypothesis □

These two claims prove that Ψ̃_s = Ψ_s^{-1}. □

The following lemma says more about the functions Φ_s on substitutions:

Lemma 4.15 The functions Φ_s are surjective from the substitutions V → T(F) onto the set of well-sorted substitutions V_s → WT(F_⊕).

Proof: Let ρ : V_s → WT(F_⊕) be a well-sorted substitution. Define x^σ = Ψ̃(x_s^ρ). We have sort(x_s^ρ) = st(x_s) = st(x) + s, and x_s^ρ ∈ WT(F_⊕). Now we can derive for any x ∈ V:

  Φ(s, σ)(x_s) = Φ(s + st(x) − sort(x^σ), x^σ)             by the definition of Φ(s, σ)
               = Ψ(s + st(x), x^σ)                          by the definition of Ψ
               = Ψ(s + st(x), Ψ̃(x_s^ρ))                     by the definition of x^σ
               = Ψ(s + st(x), Ψ_{s+st(x)}^{-1}(x_s^ρ))       because sort(x_s^ρ) = s + st(x)
               = x_s^ρ                                       by lemma 4.14

So we can find a σ : V → T(F) such that Φ(s, σ) = ρ, for any well-sorted substitution ρ : V_s → WT(F_⊕), and hence Φ_s is surjective onto these well-sorted substitutions. □

Now we can prove the goal lemmas of this section:

Lemma 4.16 For t, t' ∈ T(F): if t →_R t' then Ψ(s, t) →_{R_⊕} Ψ(s, t').

Proof: (induction on the structure of the proof that t →_R t')

If t = l^σ and t' = r^σ for some substitution σ : V → T(F) and rule l → r ∈ R: let p = s − sort(l) = s − sort(r). Then:

  Ψ(s, l^σ) = (Φ(p, l))^{Φ(p,σ)}          by lemma 4.12
            →_{R_⊕} (Φ(p, r))^{Φ(p,σ)}     because Φ(p, l) → Φ(p, r) ∈ R_⊕
            = Ψ(s, r^σ)                    by lemma 4.12

If f(t1, ..., tj, ..., tn) →_R f(t1, ..., tj', ..., tn), with u = tj →_R tj' = u', then we have:

  Ψ(s, f(t1, .., u, .., tn)) = Φ(s − st(f), f(t1, .., u, .., tn))                      by the definition of Ψ
                             = f_{s−st(f)}(Φ(s1, t1), ..., Φ(sj, u), ..., Φ(sn, tn))    where si = s − st(f) + ar(f, i) − sort(ti), by the definition of Φ

Claim 1 Let sj' = s − st(f) + ar(f, j) − sort(u'). Then Φ(sj, u) →_{R_⊕} Φ(sj', u').

Proof: This can be proved straightforwardly:

  Φ(sj, u) = Φ(s − st(f) + ar(f, j) − sort(u), u)
           = Ψ(s − st(f) + ar(f, j), u)               by the definition of Ψ
           →_{R_⊕} Ψ(s − st(f) + ar(f, j), u')         by the induction hypothesis
           = Φ(s − st(f) + ar(f, j) − sort(u'), u')    by the definition of Ψ
           = Φ(sj', u') □

Resuming the first computation we get:

  f_{s−st(f)}(Φ(s1, t1), ..., Φ(sj, u), ..., Φ(sn, tn))
    →_{R_⊕} f_{s−st(f)}(Φ(s1, t1), ..., Φ(sj', u'), ..., Φ(sn, tn))    where sj' = s − st(f) + ar(f, j) − sort(u'), by the claim
    = Φ(s − st(f), f(t1, ..., u', ..., tn))                            by the definition of Φ
    = Ψ(s, f(t1, ..., u', ..., tn))                                    by the definition of Ψ □

Lemma 4.17 For all t, t' ∈ T(F): if Ψ(s, t) →_{R_⊕} Ψ(s, t') then t →_R t'.

Proof: The proof is analogous to the proof of lemma 4.6: induction on the structure of the proof of the reduction step in R_⊕.

Base case. If Ψ(s, t) = l^ρ and Ψ(s, t') = r^ρ for some l → r ∈ R_⊕, then l = Φ(p, a) and r = Φ(p, b) with a → b ∈ R and some p ∈ S (by the definition of R_⊕). Then

  s = sort(Ψ(s, t))             by lemma 4.11
    = sort(l^ρ) = sort(l)       by proposition 3.2
    = sort(Φ(p, a))
    = sort(a) + p               by lemma 4.8

So p = s − sort(a). Furthermore sort(l^ρ) = sort(r^ρ) by proposition 3.6, and ρ ↾ Var(l) is well-sorted by proposition 3.3. Now we have:

  Ψ(s, t) = (Φ(p, a))^ρ
          = (Φ(p, a))^{Φ(p,σ)}    by lemma 4.15, for some σ
          = Ψ(s, a^σ)             by lemma 4.12

Because Ψ_s is an injective function, we can conclude that t = a^σ. In the same way we obtain that t' = b^σ. But then we have t →_R t', because a → b ∈ R.

Induction step. If

  - Ψ(s, t) = k = f_p(k1, .., kj, .., kn),
  - Ψ(s, t') = k' = f_p(k1, .., kj', .., kn), and
  - kj →_{R_⊕} kj',

then from proposition 3.6 we obtain that sort(kj) = sort(kj'). Define si = sort(ki); then sort(kj') = sj. From the induction hypothesis we get that Ψ^{-1}(sj, kj) →_R Ψ^{-1}(sj, kj'). Now we can derive:

  t = Ψ^{-1}(s, k)
    = f(Ψ^{-1}(s1, k1), .., Ψ^{-1}(sj, kj), .., Ψ^{-1}(sn, kn))     where si = sort(ki); see lemma 4.14, via the definition of Ψ̃
    →_R f(Ψ^{-1}(s1, k1), .., Ψ^{-1}(sj, kj'), .., Ψ^{-1}(sn, kn))   by the induction hypothesis
    = Ψ^{-1}(s, f(k1, .., kj', .., kn))                              again via Ψ̃
    = Ψ^{-1}(s, k')
    = t' □

Now we are ready to formulate that S-sorted rewriting in the direct sum M_⊕ is "isomorphic" with rewriting in the TRS Θ(M) (step 3 of the overview at the beginning of this chapter):

Proposition 4.18 There exists a bijection Ψ_s : T(F) → WT_s(F_⊕) (lemma 4.14) with, for all t, t' ∈ T(F):

  t →_R t'  ⟺  Ψ_s(t) →_{R_⊕} Ψ_s(t')

(lemmas 4.16 and 4.17).

4.3 Persistence Follows from Many-sorted Modularity

In the previous two sections we established isomorphisms between the rewrite relations of certain term rewriting systems. In this section we draw conclusions about the properties of these systems. From propositions 4.7 and 4.18 we have the following equivalences:

  for all t, t' ∈ WT(F):  t →_R t' ⟺ τ_s(t) →_{R_s} τ_s(t')
  for all t, t' ∈ T(F):   t →_R t' ⟺ Ψ_s(t) →_{R_⊕} Ψ_s(t')

Here τ_s : WT(F) → WT(F_s) and Ψ_s : T(F) → WT_s(F_⊕) are bijections (lemmas 4.2 and 4.14).

Proposition 4.19 For reduction properties P on term rewriting systems and s ∈ S: P(M) ⟺ P(M_s).

Proof: From the observations above we can conclude immediately that the rewrite relations →_R and →_{R_s} have the same properties. So the term rewriting systems M and M_s share the properties that can be defined for abstract reduction systems (see section 2.3.1). These are precisely the reduction properties. □

Examples of reduction properties are CR, SN, WCR, WN, UN, UN→ and NF.

Proposition 4.20 For component closed reduction properties P on term rewriting systems: P(Θ(M)) ⟺ P(⊕_{s∈S} M_s).

Proof: The rewrite relation of the MTRS (F_⊕, R_⊕, WT) is the union of the rewrite relations of the systems (F_⊕, R_⊕, WT_s) for s ∈ S. Because terms of one sort don't rewrite to terms of other sorts (proposition 3.6), the equivalence classes of the former are the union of the equivalence classes of the latter. Because the property P is component closed, we have for any s ∈ S that P((F_⊕, R_⊕, WT)) ⟺ P((F_⊕, R_⊕, WT_s)). The latter is equivalent to P((F, R, T)) due to the similarity of the rewrite relations shown in proposition 4.18. □

Proposition 4.21 Let M be an S-sorted MTRS with a finite set of sorts, and let P be an MS-modular reduction property. Then (∀s ∈ S : P(M_s)) ⟺ P(⊕_{s∈S} M_s).

Proof: Because taking the disjoint union is associative and the disjoint union of two MTRSs is an MTRS, we can apply the MS-modularity of P |S| times. Here the finiteness of S is used. □

Let P be a component closed property and let M be an S-sorted MTRS. Furthermore assume that P is an MS-modular property for MTRSs. Then we have achieved:

  P holds for M
  ⟺ (by proposition 4.19) P holds for each M_s
  ⟺ (by proposition 4.21 and MS-modularity) P holds for M_⊕
  ⟺ (because P is component closed) P holds for M_⊕ restricted to sort s
  ⟺ (by proposition 4.20) P holds for Θ(M).

So we have proved that P holds for a many-sorted TRS if and only if P holds for the unsorted version of it, under the assumption of MS-modularity and component closedness of P and the finiteness of S. In other words:

Theorem 4.22 A component closed and MS-modular property is persistent for finitely-sorted term rewriting systems.

Together with the result of theorem 3.10 we get:

Theorem 4.23 For component closed properties P of many-sorted term rewriting systems with finitely many sorts: P is persistent if and only if P is MS-modular.

4.4 Further Remarks

In this section two items are discussed: the generalization of the result of theorem 4.23 to an infinite set of sorts, and the generalization to some syntactic properties.

Theorem 4.23 only holds for a finite set of sorts. This restriction was used in proposition 4.21 only, to establish modularity for the direct sum of more than two MTRSs. If the number of sorts is finite, we can generalize the modularity for the direct sum of two MTRSs to the direct sum ⊕_{s∈S} M_s by induction on the number of sorts; note that the disjoint union is associative. But what about the direct sum of infinitely many MTRSs? Note that in every reduction sequence, only function symbols from finitely many MTRSs play a role. The reason is that every rule originates from exactly one MTRS, so we can't introduce new symbols. So if we restrict ourselves to properties that are not only component closed, but also reduction tree closed, we can generalize the modularity result to the direct sum of an infinite family of MTRSs. It is easy to see that SN, WN, CR, WCR and UN→ only depend on the reduction trees of all terms. But the property UN depends on a whole equivalence class, and it is possible to have an equivalence class in which infinitely many MTRSs play a role, as the following example shows:

Example 6 Let M_i = {f_i(x) → x} for every i ∈ ℕ (and let b be some constant). Then in the direct sum ⊕_{i∈ℕ} M_i all terms f_i(b) are convertible, because f_i(b) → b ← f_j(b) is a conversion in this direct sum.

Lemmas 4.7 and 4.18 don't depend on the finiteness of the set of sorts, but they depend on the existence of the operators + and − with the right properties. This leads to the following theorem:

Theorem 4.24 Let S be a set of sorts for which a zero element 0 and a binary operator + can be found in such a way that (S, +) forms an abelian group. Let P be a reduction tree closed property on MTRSs that is MS-modular. Then P is persistent for S-sorted MTRSs.

Until now, we have only considered reduction properties. But from the construction of the MTRSs M_s (making copies) it is clear that the syntactic properties defined in section 2.3.2 are maintained. Also, taking the direct sum does not disturb these properties. So the original MTRS has a collapse rule if and only if all MTRSs M_s have a collapse rule. The latter

holds if and only if the direct sum of these MTRSs has a collapse rule. The same holds for a non-left-linear rule or a duplicating rule, and for other well-known syntactic properties such as overlapping redexes and non-ambiguous rules. So we can extend our result with: if P is a modular, component closed property for collapse free (left-linear, duplicate free, etc.) many-sorted term rewriting systems, then it is persistent for collapse free (left-linear, duplicate free, etc.) many-sorted term rewriting systems. This is a useful generalization, because the modularity of strong normalization is restricted to combinations of collapse free and duplicate free TRSs [Rus87].


5 Conclusions and Questions

In section 3.4 a convenient definition of the direct sum of many-sorted term rewriting systems has been given. With this definition the direct sum of two MTRSs yields an MTRS again. This led to the definition of modularity for MTRSs, MS-modularity for short, in section 3.4. In sections 4.3 and 3.5 we proved that for component closed properties and many-sorted term rewriting systems with a finite set of sorts, persistence is equivalent to modularity. We have also seen that we can generalize to infinite sets of sorts if we restrict to reduction tree closed properties. Also some important syntactic properties are maintained by the constructions of the proof, so we are able to restrict to left-linear, collapse free or duplicate free MTRSs.

Proving that confluence is a persistent property only depends on checking Toyama's proof of the modularity of confluence in [KMTdV91]. As a conjecture we state that confluence is a persistent property of many-sorted term rewriting systems. This statement gives a strong proof technique for the confluence of concrete TRSs. A TRS due to Toyama, built from two unary function symbols f and g and two binary function symbols a and b, can be found in [Zan91, p. 629]. Proving the confluence of this TRS directly is not easy, but we can assign the following compatible {1, 2, 3}-sorted signature to it:

  f : 1 → 1
  g : 1 → 1
  a : 1 × 1 → 2
  b : 1 × 1 → 3

The new MTRS is provably confluent with standard techniques: far fewer terms have to be taken into account. With the persistence of confluence, we obtain that the original TRS is confluent.

There are some remaining questions. One of them is whether properties are modular for TRSs if and only if they are MS-modular for MTRSs. This can be answered by "type-checking" the existing modularity proofs, but perhaps a more general proof can be given. The notion of a well-sorted substitution could be used to give a definition of well-sorted rewriting without using unsorted rewriting. Another issue is whether the notion of persistence can be generalized to order-sorted term rewriting systems [GJM85]. A related question is whether the proof as given in this thesis could be generalized to the order-sorted case.


References

[GJM85] J.A. Goguen, J.-P. Jouannaud, and J. Meseguer. Operational semantics of order-sorted algebra. In Proceedings of the 12th International Colloquium on Automata, Languages and Programming, volume 194 of Lecture Notes in Computer Science, pages 221-231, Nafplion, 1985.

[KK88] M. Kurihara and I. Kaji. Modular term rewriting systems: termination, confluence and strategies. Report, Hokkaido University, Sapporo, 1988.

[KMTdV91] J.W. Klop, A. Middeldorp, Y. Toyama, and R. de Vrijer. A simplified proof of Toyama's theorem. Technical Report IR-270, Vrije Universiteit Amsterdam, December 1991.

[Mid89] A. Middeldorp. A sufficient condition for the termination of the direct sum of term rewriting systems. In Proceedings of the 4th IEEE Symposium on Logic in Computer Science, pages 396-401, Pacific Grove, 1989.

[Mid90] A. Middeldorp. Modular Properties of Term Rewriting Systems. PhD thesis, Vrije Universiteit te Amsterdam, Amsterdam, 1990.

[Rus87] M. Rusinowitch. On termination of the direct sum of term rewriting systems. Information Processing Letters, 26:65-70, 1987.

[SS87] M. Schmidt-Schauß. Computational Aspects of an Order-Sorted Logic with Term Declarations, volume 395 of Lecture Notes in Artificial Intelligence (subseries of LNCS). Springer-Verlag, 1987.

[Toy87a] Y. Toyama. Counterexamples to termination for the direct sum of term rewriting systems. Information Processing Letters, 25:141-143, 1987.

[Toy87b] Y. Toyama. On the Church-Rosser property for the direct sum of term rewriting systems. Journal of the ACM, 34(1):128-143, 1987.

[Zan91] H. Zantema. Termination of term rewriting: from many-sorted to one-sorted. In J. van Leeuwen, editor, Computing Science in the Netherlands, volume 2 of CSN proceedings, pages 617-629. SION, CSN'91, November 1991.

[Zan92] H. Zantema. Type removal in term rewriting. In M. Rusinowitch and J.L. Remy, editors, Conditional Term Rewriting Systems, CTRS 92 (Extended Abstracts), pages 86-90, Pont-à-Mousson, July 1992. Centre de Recherche en Informatique de Nancy. To be published in the Springer-Verlag collection.