traditional (syntactic) rules for data refinement of programs can be derived ... !Bool is applied to a single state, the result ... following shunting properties show how these two constructs are related: P ; Q R .... tional updates), the second one being a multiple assignment statement. A ... di erent classes of predicate transformers.
Encoding, Decoding, and Data Re nement Ralph-Johan Back
Abo Akademi University, Department of Computer Science Lemminkaisenkatu 14 A, FIN-20520 Turku, Finland email: backrj@abo.
Joakim von Wright
Abo Akademi University, Department of Computer Science Lemminkaisenkatu 14 A, FIN-20520 Turku, Finland email: jwright@abo.
Turku Centre for Computer Science TUCS Technical Report No 236 February 1999 ISBN 952-12-0372-2 ISSN 1239-1891
Abstract Data re nement is the systematic replacement of a data structure with another one in program development. Data re nement between program statements can on an abstract level be described as a commutativity property where the abstraction relationship between the data structures involved is represented by an abstract statement (a decoding). We generalise the traditional notion of data re nement by de ning an encoding operator that describes the least (most abstract) data re nement with respect to a given abstraction. We investigate the categorical and algebraic properties of encoding and describe a number of special cases, which include traditional notions of data re nement. The dual operator of encoding is decoding, which we investigate and give an intuitive interpretation to. Finally we show a number of applications of encoding and decoding.
TUCS Research Group
Programming Methodology Research Group
1 Introduction Data re nement is the systematic replacement of a data structure (abstract data) by another one (concrete data) in program development. Traditionally, data re nement has been formalised in terms of a data re nement relation which generalises the (algorithmic) re nement relation between programs. In this paper we consider data re nement as an operator rather than a relation. The encoding operator # is de ned so that S # D is the most general (least re ned) data re nement of statement S with respect to an abstraction statement D (an abstraction statement models the relationship between the concrete and the abstract state space). Traditional notions of data re nement (forward, functional and backward) are special cases of encoding, where the abstraction statement satis es some additional conditions. Thus, traditional (syntactic) rules for data re nement of programs can be derived from the general algebraic properties of the encoding operator. We work within the re nement calculus framework described in [7]. This is a formalisation using classical strongly typed higher-order logic of the traditional re nement calculus [1, 2, 16] which was based on the weakest precondition semantics of programs [9]. The basis of the re nement calculus is the predicate transformer hierarchy, where program are modeled as predicate transformers, which in turn are built from predicates, functions and relations using homomorphic operators (statement constructors). We add the encoding operator to this hierarchy and investigate its algebraic properties and its use in calculational data re nement. Using Galois connections we nd that under certain restrictions there exists a dual decoding operator " which allows us to calculate the least general (most re ned) abstraction S " D of a given (concrete) statement S with respect to abstraction statement D. We also investigate the properties of this operator and illustrate its use. The idea of describing data re nement in terms of general abstraction statements is not new [11, 20], but our notion of least data re nement and its dual have to our knowledge not been investigated in a predicate transformer framework before. Data re nement has been the subject of many detailed studies, but mostly in a relational framework (for an overview of dierent methods and theories of data re nement, see [19]. The relational framework is less expressive than predicate transformers and does now allow a uniform handling of data re nement in the form described here. We use higher-order logic as the underlying logic of our investigation, with lambda notation for functions (e.g., (x x = 1)) and an in x dot for function application (f: x). Quanti ers are given low precedence and their scope is delimited by parentheses. We use standard notation for logical connectives and F; T for boolean falsity and truth. Proofs are written in a 1
calculational style, with indented subderivations [4]. The paper is organized as follows. Section 2 gives a short overview of the basic concepts related to the predicate transformer hierarchy and its use in the re nement calculus, and some other background material. Then, in Section 3, we introduce the notion of encoding as part of the data re nement method, and study the algebraic properties of encoding. This study makes only weak assumption about the abstraction statement D (assuming only monotonicity). In Section 4 we study the properties of encoding when we consider the kinds of abstraction statements that would usually be used in program derivations. In particular, we study encoding in connection with forward and backward data re nement. We show that encodings in such cases can be calculated explicitly. In Section 5 we consider the dual decoding operation, study its properties, and show how it is related to encoding. In Section 6, we look at encoding and decoding of recursive statements in more detail. Finally, in Section 7, we give some examples of how the algebraic results derived in the previous sections can be translated into more concrete properties about program statements expressed in ordinary programming language syntax.
2 Background We begin with a brief overview of the basic concepts related to the predicate transformer hierarchy.
2.1 States, predicates and relations
A state space can be any type . In general, we do not assume that this type has any speci c internal structure. An element : is called a state. A state function f : ! ? maps states to states (note that the two state spaces can be dierent). Function composition is written forward (so (f ;g): = g: (f: )) and the unit of composition is the identity function id. A state predicate is a boolean state function p : ! Bool. Since p can be identi ed with the set of states where p: T (i.e., where p holds), we use set notation for meet (intersection) and join (union) of predicates. The unit elements of meet and join are the everywhere true predicate true and the everywhere false predicate false, respectively. The complement (negation) of predicate p is :p. Predicates are ordered by the subset (stronger-than) ordering . A state transformer is a function f : ! that changes the state in a deterministic way. A state relation is a binary boolean state function 2
R : ! ? ! Bool. We think of a relation as a nondeterministic state transformation, transforming an initial state to one of the states such that R: : holds. When a relation R : ! ? ! Bool is applied to a single state, the result R: is a set of states, i.e., a predicate. Extending this partial application of a relation to a set of states, we have a notion of image and inverse image of a predicate under relation: im: R: p =^ f j 9 2 p 2 R: g im: R: q =^ f j 8 2 R: 2 q g
(image ) (inverse image )
Equivalently, we can write im: R: p: (9 p: ^ R: : ) im: R: q: (8 R: : ) q: )
which clearly shows the duality between the two operators. Meet (intersection) and join (union) of relations are de ned by pointwise extension from predicates, e.g., (Q \ R): =^ Q: \ R: Subset ordering on relations is the extension of the subset ordering on predicates,
Q R =^ (8 Q: R: ) Composition of relations is also de ned as usual: (Q ; R): : =^ (9 Q: : ^ R: : )
(composition )
The unit elements of composition, meet and join are the identity relation Id, the everywhere true relation True and the everywhere false relation False, respectively. In addition, we de ne a quotient operator for relations: (QnR): : =^ (8 Q: : ) R: : )
(quotient )
The quotient operation (QnR) is similar to the composition Q ; R. We have that (Q ; R): : holds when there exists some intermediate state reachable from by Q such that R: : , while (QnR): : holds when for every 3
intermediate state reachable from by Q, we have that R: : holds. The following shunting properties show how these two constructs are related:
P ; Q R P RnQ?1 P ; Q R Q P ?1nR
(shunt )
Predicates and functions can easily be coerced into relations, by the following de nitions:
jpj: : =^ p: ^ = jf j: : =^ = f:
(test relation ) (mapping relation )
Furthermore, the inverse of a function is a relation:
f ?1: : =^ = f:
(inverse of function )
2.2 Program variable notation
Although most of our investigation is carried out on the algebraic level, we will need a syntactic level with program variables in the examples that illustrate the concepts and results. For this, we need to show how to model expressions and assignments in program variables. For this purpose, we assume that in practice the state space is really a product, = 1 : : : n. The projection function i : ! i , 1 i n, gives the value of the ith component. A declaration like var y : Bool; x : Nat; z : Nat de nes a state space and at the same time introduces more convenient names for the projection functions. We are only interested in the set of program variables, not in the order in which they are listed. Hence, we assume that any declaration of program variables is equivalent to a declaration where the variables are listed in some standard order (e.g., lexicographical order). In this case, the equivalent declaration with standard ordering would be var x : Nat; y : Bool; z : Nat This declaration denotes the state space = Bool Nat Nat. The program variables are identi ed with the corresponding projection functions, so that x = 1 , y = 2 and z = 3 . An expression in program variables like x (z + 1) or a boolean expression like y ) x = z is evaluated in a state by rst applying the program 4
variables to the state. Computing the value of the rst expressions in the state = (a; b; c), we have that (x (z + 1)): = x: (z: + 1) = 1: (3 : + 1) = a (c + 1) A predicate can be described by a boolean expression over the program variable(s), e.g., x + z > 0. A relation can also be described using program variables. For instance, the relation x < x0 + z holds between states and 0 if x: < x: 0 + z: . Thus, we write x0 to denote that the program variable should be applied to the nal state rather than the initial state. We use the assignment notation for relations in program statements. The basic form is (x := x0 j b)
(relational assignment )
where b is a boolean expression over primed and unprimed program variables. We have that (x := x0 j b): : 0 holds if b: : 0 holds and state components other than x have the same value in and 0 (i.e., if x = i , then 1 : = 1: 0; : : : ; i?1 : = i?1 : 0; i+1: = i+1 : 0; : : : ; n: = n: 0). This generalizes in the obvious way the situation where x is actually a list of program variables. For example, (x := x0 j x0 > x) is a relation that increases the value of x by an arbitrary amount, but does not change y or z. Consider two state spaces, determined by var x; z and 0 determined by var y; z, where x, y and z are lists of program variables (with type indications). Here the program variables z are common to both state spaces. We generalize the assignment notation to account for relations over dierent state spaces, to the form (y=x := y0 j b)
(generalised relational assignment )
This is a relation from initial state space with program variables x; z to nal state space 0 with program variables y; z. We have that (y=x := y0 j b): : 0 holds if b: : 0 holds and z: = z: 0 (note that if x and y have dierent types then the two occurrences of z here denote dierently typed projection functions). The form y=x shows that the program variables y are added to the state space and that the program variables x are deleted from the state space. This avoids the need to keep explicit track of the state spaces involved in a program statement, even when we are changing the state space in the middle of a statement. As an example, the assignment (y=x := y0 j x = y0 + 1) has the following intuition: the program variable x is replaced by program variable y which gets a new value y0 related to the old value of x by x = y0 + 1. 5
Note also that the simple relational assignment above is a special case of this more general relational assignment, where the same variable is removed and introduced: (x := x0 j b) = (x=x := x0 j b) The ordinary (functional, or deterministic) assignment (x := e) is also a special case, which we can de ne by:
jx := ej = (x := x0 j x0 = e)
(functional assignment )
Thus (x := e) denotes a state transformer that changes the value of x to the value of e in the initial state. The generalisation to a deterministic assignment (y=x := e) that replaces variables is straightforward:
jy=x := ej = (y=x := y0 j y0 = e) (generalised functional assignment ) In Appendix B we give a number of rules for manipulating assignments. This way of modelling program variables in the re nement calculus is quite simple and straightforward. A disadvantage is that the number of program variables in a state is always xed. Thus, var x : Nat; y : Nat denotes a state space with two components, whereas var x : Nat; y : Nat; z : Bool is a dierent state space with three components. The transformation between these two state spaces has to be done explicitly, with coercion functions. In [7] we present a more general and exible approach to modelling program variables, which avoids the need for coercion functions and is therefore better suited for programming logics. For the purposes of this paper, that added
exibility is not, however, needed, so we will use the simple model here instead.
2.3 Predicate transformers
A predicate transformer is a function that maps predicates to predicates. We use the abbreviation 7! ? for the predicate transformer type (? ! Bool) ! ( ! Bool). The intuition is that S : 7! ? maps postconditions over state space ? to preconditions over state space . Predicate transformers have a weakest precondition interpretation. Thus, S: q is a predicate that characterises those initial states from which execution of S is guaranteed to terminate in a nal state where postcondition q holds. The re nement ordering on 7! ? is de ned by pointwise extension of the ordering on predicates:
S v S 0 =^ (8p S: p S 0: p) 6
(re nement )
Programs are modeled by monotonic predicate transformers 7!m ?. These form a complete lattice with respect to the re nement ordering. In the rest of this paper we will assume monotonicity without usually mentioning this fact explicitly. The algebraic structure of predicate transformers gives us three basic program operators (composition, meet and join) and three corresponding basic constants (the unit element skip, the top element magic and the bottom element abort). In addition, we will work with xpoints constructs (to be explained later) and with four homomorphic embedding operators. Two of them embed predicates in predicate transformers and the other two embed relations:
fpg: q =^ p \ q [p]: q =^ :p [ q fRg: q =^ im: R?1: q
(assertion ) (guard ) (angelic update ) (demonic update )
[R]: q =^ im: R: q
for predicates p and relations R. Using the de nitions of images, the two update statements are characterised as follows: fRg: q: R: \ q = ; and [R]: q: R: q A special case of demonic update is the chaotic update chaos = [True]. Note also that assertions and guards are special cases of updates: fpg = f 0 p: ^ 0 = g and [p] = [ 0 p: ^ 0 = ] We also introduce an embedding of state transformers:
hf i: q =^ f ; q
(functional update )
We will not consider the functional update to be a primitive notion here, since it is a special case of both the other updates: fjf jg = hf i = [jf j]. However, we will occasionally comment on how results for the angelic and demonic updates carry over to functional updates. There is a strong duality built into the constructs described above. We de ne the duality operator on predicate transformers by
S : q =^ :S: (:q) Then fRg and [R] are duals while hf i is self-dual.
(dual )
In practice, we use assignments to express state changes. For instance, hx := x + zi ; hy; z := T; xi is a sequence of two assignment statements (functional updates), the second one being a multiple assignment statement. A 7
demonic assignment is a demonic update where the relation is an assignment, e.g., [x := x0 j x0 > x]. We can introduce more traditional program constructs using the basic statements. For instance, a conditional statement is de ned by if b then S1 else S2 =^ fbg ; S1 t f:bg ; S2
and a block with local variables is de ned by
begin var x := c ; S end =^ hx= := ci ; S ; h=xi
Here we rst introduce a new variable x that is initialized to the constant c (without removing any variables), then we execute S , and nally we remove x (without introducing any new variables). The well-known Knaster-Tarski theorem guarantees that every monotonic function f on a complete lattice has a least xpoint : f and a greatest xpoint : f . This allows us to de ne predicate transformers by recursion. For instance, a simple while loop is de ned by while b do S od =^ : (X if b then S ; X else skip ) The above should be sucient to indicate that the statements that we can construct from our basic predicate transformer constructs are very general, and allow us to model both speci cations as well as executable statements within the same framework. For more details, we refer to [7]. A number of useful algebraic properties can be proved about the basic predicate transformer constructs. As an example, the basic predicate transformer constructs are homomorphic with respect to composition (considering \ as the composition of predicates). Thus we have
fpg ; fqg = fp \ qg fQg ; fRg = fQ ; Rg hf i ; hgi = hf ; gi
[p] ; [q] = [p \ q] [Q] ; [R] = [Q ; R]
for arbitrary predicates p and q, relations Q and R and functions f and g. A more thorough treatment of the homomorphic properties of statements is also given in [7].
2.4 Homomorphic predicate transformers and normal forms
Predicate transformers can be classi ed according to basic homomorphism properties (in addition to monotonicity). We say that S is conjunctive if it 8
distributes over nonempty meets, i.e., if S: (\ i 2 I qi) = (\ i 2 I S: qi ) for arbitrary nonempty collections fqi j i 2 I g of predicates. Dually, S is disjunctive if it distributes over nonempty joins of predicates. Furthermore, S is strict if S: false = false and terminating if S: true = true. If a predicate transformer S that is both terminating and conjunctive (i.e., if S distributes over arbitrary meets) then it is said to be universally conjunctive. Dually, S is said to be universally disjunctive if it is both strict and disjunctive. It is well known that conjunctivity and disjunctivity each implies monotonicity. Furthermore, universal disjunctivity implies continuity and universal conjunctivity cocontinuity. The basic notation that we introduced above provides normal forms for dierent classes of predicate transformers. The following lemma summarises the normal forms that will be used in this paper. Lemma 1 Assume that S is a predicate transformer. (a) If S is monotonic then it can be written in the form fQg ; [R] for some relations Q and R. (b) If S is conjunctive then it can be written in the form fpg ; [R] for some predicate p and some relation R. (c) If S is universally conjunctive then it can be written in the form [R] for some relation R. (d) If S is universally disjunctive then it can be written in the form fRg for some relation R. (e) If S is conjunctive and universally disjunctive then it can be written in the form fpg ; hf i for some predicate p and some function f . This normal form theorem is proved in [20].
2.5 Galois connections and inverse statements
Assume that (A; v) and (B; v) are partially ordered sets and that functions f : A ! B and g : B ! A are given. We say that the pair (f; g) is a Galois connection if the following condition holds for all x and y:
f: x v y x v g: y
(Galois connection )
For example, the connection between the composition and quotient operations on relations show that ((X P ; X ); (Y P ?1nY )) is also a Galois 9
connections, as is the pair ((X X ; Q); (Y Y nQ?1)). As an example, we also show that the pair (im: R; im: R) is a Galois connection, for arbitrary relation R: im: R: p q
fde nitions of image and predicate orderingg (8 (9 R: : ^ p: ) ) q: ) fpredicate calculusg (8 R: : ^ p: ) q: ) fpredicate calculusg (8 p: ) (8 R: : ) q: )) fde nition of inverse relationg (8 p: ) (8 R?1 : : ) q: )) fde nitions of inverse image and predicate orderingg p im: R: q
When the functions f and g are monotonic, the notion of a Galois connection can be expressed algebraically on the level of the functions themselves:
f g v id
and id v g f When the partially ordered sets involved are complete lattices, we can say even more about Galois connections. Lemma 2 Assume that A and B are complete lattices. (a) If the pair of monotonic functions (f : A ! B; g : B ! A) is a Galois connection, then f is a (universal) join homomorphism and g is a (universal) meet homomorphism. (b) If g : B ! A is a meet homomorphism, then there exists a unique monotonic function f : A ! B such that (f; g) is a Galois connection. (c) If f : A ! B is a join homomorphism, then there exists a unique monotonic function g : B ! A such that (f; g) is a Galois connection. When (f; g) is a Galois connection, then f is called the left adjoint of g and g the right adjoint of f . The following shunting properties show how functions in a Galois connection can be manipulated algebraically. Lemma 3 Assume that the pair of monotonic functions (f; g) is a Galois connection. Then 10
(a) f ; h v h0 h v g ; h0 (b) h ; g v h0 h v h0 ; f for arbitrary monotonic functions h and h0 . Since statements are seen here as monotonic predicate transformers on a complete lattice of predicates, we can apply the theory of Galois connections directly to statements. From the normal form theorem (Lemma 1) we know that a Galois connection (S; T ) of statements must be of the form (fP g; [Q]) for some state relations P and Q. The following lemma makes this even clearer. Lemma 4 Let state relation R, predicate p and state function f be arbitrary. Then (a) fRg and [R?1] form a Galois connection, and (b) frg ; hf i and [f ?1] ; [r] form a Galois connection. Proof Since the pair (im: R; im: R) is a Galois connection the de nitions immediately give us that [R?1] is the inverse of fRg. Now (b) follows from this, by setting R = jrj ; jf j. 2 When a statement pair (S; T ) is a Galois connection we will refer to T as the inverse of S , denoted by S . Lemma 4 gives us a general rule for inverse statements. In program variable notation, these rules amount to the following: fx := x0 j bg = [x := x0 j b[x; x0 := x0 ; x]] fpg ; hx := ei = [x := x0 j x = e[x := x0 ]] ; [p] In addition to this, we will use the fact that S1 ; S2 = S2 ; S1 Inverse statements are investigated in more detail in [6]. A simple example can illustrate the idea of inversion. Consider the statement S = hx := x + 1i. The inverse S of this statement is hx := x + 1i = [x > 0] ; hx := x ? 1i Executing S ; S from an initial state where x = x0 leads to an intermediate state where x = x0 + 1 and a nal state where x = x0 , so S ; S = skip. On the other hand, S ; S is miraculous when executed in initial state where x = 0, but from any other initial state it terminates in the initial state, so S ; S w skip.
11
S ???!
x
D? ?
S 0 ???! 0
x ? ?D 0
Figure 1: Data re nement
3 Data re nement and encoding The basis for encoding is data re nement, which can on an abstract level be described as a commutativity property: we say that S is data re ned through D by S 0 (or that S is D-re ned by S 0), denoted S vD S 0, if the following condition holds:
D ; S v S0 ; D (data re nement ) Here D : ? 7! (the abstraction statement), S : 7! (the abstract statement) and S 0 : ? 7! ? (the concrete statement) are monotonic predicate
transformers. Data re nement can be illustrated by a subcommuting diagram where a path lower down is always a re nement of a path higher up with the same end points (see Figure 1).
3.1 Abstraction category
Data re nement gives us a category, where there is an arrow D (an abstraction) with source S 0 and target S exactly when S vD S 0 holds. Identity morphisms are of the form skip (note that vskip is ordinary algorithmic re nement v, which is re exive) and the composition of morphisms is sequential composition; if S vD S 0 and S 0 vD0 S 00, then D0 ; D ; S v f rst assumption, D monotonicg D0 ; S 0 ; D v fsecond assumptiong S 00 ; D0 ; D so S vD0 ;D S 00. This is illustrated in the diagram in Figure 2. It is well known that data re nement can be handled in category-theoretic terms [15]. However, we prefer to work in the simpler partial order framework where data re nement is expressed in terms of algorithmic re nement. THow this is done is shown in the next subsection. 12
D0 ; D
? D D0 00 0 S S S Figure 2: Abstraction category
3.2 Encoding
A typical situation in data re nement is that the abstract statement S and the abstraction D are given. The problem is then to nd a monotonic predicate transformer S 0 such that S vD S 0 holds. Furthermore, we are interested in making S 0 as small as possible (with respect to the re nement ordering), giving us maximal freedom in subsequent re nement steps. Or, to put it in another way, we are interested in the smallest D-re nement of S . Consider therefore D ; S v X ; D as an equation in unknown monotonic predicate transformer X . This equation obviously has a solution (set X = magic). Furthermore, if fSi j i 2 I g is a set of solutions, then (u i 2 I Si ) is monotonic and (u i 2 I Si) ; D = fdistributivityg (u i 2 I Si ; D) w fassumptiong (u i 2 I D ; S ) = fvacuous meetg D;S so there always exists a least solution, which is u fX 2 7!m j D ; S v X ; Dg. This is obviously the smallest D-re nement of S . We introduce the notation S # D for this solution, and call it the encoding of S with D. This means that S # D is characterised by the following two conditions:
S vD (S # D) S vD X ) S # D v X
(encode1) (encode2)
The following alternative characterisation of encoding is generally more useful: 13
Theorem 5 The encoding S # D is uniquely de ned by the following property:
S # D v S 0 S vD S 0 Proof First assume that conditions (encode1) and (encode2) hold. Then
(S # D) v S 0 ) fmonotonicityg (S # D) ; D v S 0 ; D ) fproperty (encode1), re nement is transitiveg D ; S v S0 ; D and (encode2) with X := S 0 gives implication in the opposite direction. Now assume that S # D v S 0 S vD S 0 for arbitrary S 0. Then
D ; S v (S # D) ; D fassumption with S 0 := S # Dg (S # D) v S # D
) fre nement is re exiveg T
and
D;S vX ;D
fassumption with S 0 := X g S #D v X and the proof is nished. 2
Assume that monotonic predicate transformer S : 7! and monotonic abstraction D : ? 7! are given. Then S # D has type ? 7! ?. We think of S # D as the translation of S to the state space ?, such that only the data representation is changed. Thus, a data re nement S vD S 0 can be decomposed so that the pure data re nement is \factored out", and followed by the \remaining" algorithmic re nement (i.e., the change in nondeterminism, miraculousness and termination that is not inherent in the data re nement)
S vD S # D v S 0 This is illustrated in Figure 3 where a horizontal arrow stands for algorithmic re nement (decoding with skip) and a vertical arrow for data re nement. 14
S x
D? ?
S # D ??? S 0 Figure 3: Factoring out data re nement
3.3 Homomorphism properties of encoding
Encoding is a binary operator on predicate transformers. We shall now investigate its homomorphism and other algebraic properties, in both arguments.
Theorem 6 Encoding is monotonic in its rst argument: S v S0 ) S # D v S0 # D Proof Assume that S1 v S2 holds. Then
S1 # D v S 2 # D
( fproperty (encode2)g D ; S1 v (S2 # D) ; D ( fassumptiong D ; S2 v (S2 # D) ; D fproperty (encode1)g T
2
From Theorem 8 below we get the following three encodings: skip # abort = abort skip # skip = skip skip # magic = chaos Since abort < chaos < skip (provided that the underlying state space has at least two elements) we see that encoding is neither monotonic nor antimonotonic in its second argument. Now let us consider how encoding distributes into basic statement constructs. Theorem 7 Encoding can be re ned to preserve the structure of its rst argument as follows: 15
(a) abort # D = abort, if D is strict, (b) skip # D v skip, (c) magic # D = magic, if D is strict and terminating, (d) fpg # D v fp0g D: p p0 , (e1) [p] # D v [p0 ] p0 D : p, if D is universally disjunctive, (e2) [p] # D v [p0 ] p0 D: p, if D is universally conjunctive, (f) (S1 ; S2 ) # D v (S1 # D) ; (S2 # D), (g) (u i 2 I Si) # D v (u i 2 I Si # D), and (h) (t i 2 I Si) # D = (t i 2 I Si # D), if D is universally disjunctive. Proof We show the proof of (d) to illustrate the general method used. The full proof is given in Appendix A. We have
fpg # D v fp0g fTheorem 5g D ; fpg v fp0g ; D fde nitionsg (8q D: (p \ q) p0 \ D: q) fmutual implicationg (8q D: (p \ q) p0 \ D: q) ) fspecialise q := pg D: p p0 \ D: p fgeneral lattice property x v x u y x v yg D: p p0 (8q D: (p \ q) p0 \ D: q) ( fgeneral rule S: (p \ q) S: p \ S: qg (8q D: p \ D: q p0 \ D: q) ( fmonotonicityg D: p p0 D: p p0 2
The restriction in Theorem 7 (h) may seem to indicate that we should in general require abstraction D to be universally disjunctive. However, as long 16
as we work within a framework of conjunctive predicate transformers (where the join operator is not included), this restriction need not be made. Theorem 7 illustrates three dierent levels of rules for encoding. The equality rules in (a), (c) and (h) have the strongest form. They tell us that abort and magic is each the least D-re nement of abort and magic, respectively. The equivalence rules in (d) and (e) are slightly weaker. For example, (d) tells us that fD: pg is the least structure-preserving D-re nement of fpg. However, this does not exclude the possibility that fpg # D is strictly less (with respect to the re nement ordering) than fD: pg. Finally, the rules in (b), (f) and (g) are even weaker, not excluding the possible existence of strictly smaller D-re nements that preserves the structure of the original statement.
3.4 The abstraction argument of encoding
The preceding theorems describe the homomorphism properties of encoding with respect to the rst argument. For completeness, it we also investigate the homomorphism properties in the second argument. Theorem 8 An encoding can be re ned to preserve the structure of its second argument, as follows: (a) S # abort = abort, (b) S # skip = S , (c) S # magic = chaos, (d) S # (D1 t D2) v (S # D1) t (S # D2) Proof We show the proof of (c) to indicate how the proofs are done. The full proof can be found in Appendix A.
S # magic v S 0
fTheorem 5g magic ; S v S 0 ; magic fde nitionsg (8q true S 0: true) fchaos is the least of all terminating predicate transformersg chaos v S 0 which shows that S # magic = chaos. 2 17
From Theorem 8 we see that encoding is very weakly homomorphic in its second argument. However, for a sequential composition of decodings we have an important result. Theorem 9 Encodings can be composed as follows: S # (D2 ; D1 ) v (S # D1) # D2 Proof
S # (D1 ; D2) v (S # D2) # D1
( fproperty (encode2)g D1 ; D2 ; S v ((S # D2 ) # D1) ; D1 ; D2 ( fproperty (encode1), monotonicityg D1 ; D2 ; S v D1 ; (S # D2) ; D2 ( fproperty (encode1), monotonicityg D1 ; D2 ; S v D1 ; D2 ; S fre exivityg T
2
The importance of Theorem 9 is illustrated as follows. Assume that we encode S with D1 and re ne this to S1 :
S # D 1 v S1
Now assume that we continue, encoding S1 with D2 and re ne this to S2 :
S1 # D2 v S2
Then
D2 ; D1 ; S
v fde nition of encodingg D2 ; (S # D1) ; D1 v fassumption, D1 monotonicg D2 ; S1 ; D1
v fde nition of encodingg (S1 # D1) ; D2 ; D1 v fassumptiong S2 ; D 2 ; D 1
18
and we can use Theorem 5 to conclude
S # (D2 ; D1 ) v S2 Theorem 9 is sucient to justify encoding in a stepwise manner. In Section 4.1 we shall see that in an important special case the re nement in Theorem 9 can be strengthened to an equality.
4 Calculating encodings So far, we have assumed only that the abstraction D in an encoding is monotonic. From the normal form theorem (Lemma 1 (a)) we know that an abstraction D can be decomposed into a sequential composition
D = fRag ; [Rd ] (in fact, if D is required to be strict and continuous, then a similar decomposition is possible where Rd is total and image- nite, so [Rd ] is strict and continuous [20]). In fact, if S vD S 00, then it is possible to nd S 0 such that
S # [Rd ] v S 0 and S 0 # fRa g v S 00 (for details, see [11, 20]; S can be taken to be S # [Rd ]). Thus it is reasonable to consider the following two special cases of encoding:
forward data re nement { encoding with a universally disjunctive abstraction, and
backward data re nement { encoding with a universally conjunctive abstraction.
This was noted by Gardiner and Morgan [11] who show that a similar combination of forward and backward data re nement provide a single method that is sound and complete for re nement of abstract data types. In a relational framework forward and backward data re nement must be de ned separately. The more expressive predicate transformer framework permits a single de nition of data re nement, with forward and backward data re nement as special cases. In practical program development, forward data re nement is sucient for most cases. Then the relation R in the abstraction statement can be written as an assignment (we shall see examples of this in Section 7). 19
4.1 Forward data re nement
When the abstraction D is universally disjunctive, encoding can be expressed explicitly. Theorem 10 Assume that S and T are monotonic predicate transformers and that D is universally disjunctive. Then S #D = D ; S ; D Proof
S #D v D ; S ; D
( fproperty (encode2)g D;S vD;S;D;D fproperty of inverses: skip v D ; Dg T
and
D ;S ;D v S#D
fshunting (Lemma 3)g D ; S v (S # D) ; D fproperty (encode1)g T
2
Theorem 10 expresses the encoding S # D explicitly, so it can be calculated. Since a universally disjunctive predicate transformer can always be written as an angelic update, we can rewrite the result as follows: S # D = fRg ; S ; [R?1 ] This shows how our general statement notation allows us to express the least D-re nement of statement S in statement form, when D is universally disjunctive. Thus, encoding reduces to a well-known construction in this special case when the abstraction statement is universally disjunctive (data re nement of S was described as fRg ; S ; [R?1] already in [5]). We now show how Theorem 9 can be strengthened to an equality: Theorem 11 Assume that either D1 or D2 is universally disjunctive. Then S # (D2 ; D1 ) = (S # D1 ) # D2 20
Proof If D1 is universally disjunctive, then (S # D1 ) # D2 v S # (D2 ; D1 ) fTheorems 10 and 5g D2 ; D1 ; S ; D1 v (S # (D2 ; D1)) ; D2 fshuntingg D2 ; D1 ; S v (S # (D2 ; D1 )) ; D2 ; D1 fTheorem 5g T
Similarly, if D1 is universally disjunctive, then (S # D1 ) # D2 v S # (D2 ; D1 ) fTheorem 5, shuntingg S # D1 v D2 ; (S # (D2 ; D1)) ; D2 fTheorem 5, shunting backg D2 ; D1 ; S v S # (D2 ; D1 ) ; D2 ; D1 fTheorem 5g T
Re nement in the other direction was proved in Theorem 9 2 Theorem 11 gives us a basis for decomposing data re nements:
Corollary 12 Assume that the data re nement S vD ;D S 00 is known, where 2
1
either D1 or D2 is universally disjunctive. Then there exists S 0 with
S vD1 S 0 vD2 S 00 (for the proof, choose S 0 to be S # D1 ). In fact, it is easily shown that S # D1 is the smallest solution to the \equation" S vD1 X vD2 S 00 (in Section 5.4 we
shall see that in certain situations this equation also has a greatest solution). Corollary 12 generalises the result mentioned in the introduction to Section 4.
4.2 Calculating encodings
Theorem 10 gives a closed expressions for encodings when the abstraction involved is universally disjunctive. This means that it is possible to calculate the encoding S # D of a program statement S . In a sense, we already have an explicit description (D ; S ; D) of S # D, but if the abstraction D involves a 21
change of state space, then we want to express D ; S ; D (or some re nement of it) purely in terms of statements that do not change the state space. Inverse statements give us a way of calculating the data re nement of any statement S . However, in practice we want to calculate data re nements in a structure-preserving way. In this, we can improve on Theorem 7, using the fact that the decoding D is of the form fRg. As a result, we get rules that are essentially equivalent to traditional rules for data re nement by calculation [3, 17, 18, 20]. We number the rules in the same way as in Theorem 7, but we only state the cases where the assumption that D is universally disjunctive gives us a better result than before. Theorem 13 Assume that D is the universally disjunctive decoding D = fRg. Then (c) magic # D = fdom: Rg ; magic, (d) fpg # D v fp0g im: R?1: p p0, and (e) [p] # D v [p0 ] p0 im: R: p, Proof For (c) we have (magic # D): q: fD is the decoding fRgg (fRg ; magic ; [R?1 ]): q: fde nition of magicg fRg: true: fde nition of angelic updateg (9 R: : ) fde nition of domaing dom: R: fde nitionsg (fdom: Rg ; magic): q: The other two cases follow directly from the corresponding results in Theorem 7. 2 Theorem 13 (c) is an example of a rule that is not structure-preserving (of course, the structure-preserving rule magic # D v magic holds trivially). Similarly, the following rule for the guard statement is valid: [p] # D v fdom: Rg ; [im: R: p]
22
4.3 Rules for relational updates
When the form of the abstraction is known, it is also possible to give rules for encoding relational updates.
Theorem 14 Assume that D is the universally disjunctive abstraction fRg. Then
(a) [P ] # D v [P 0] P 0 Rn(P ; R?1), and (b) fP g # D v fP 0g R ; P P 0 ; R Proof For (a), we have
[P ] # fRg v [P 0] fLemma 25g [P ] ; [R?1 ] v [R?1] ; [P 0] fhomomorphismg [P ; R?1 ] v [R?1 ; P 0] fembeddingg P ; R?1 R?1 ; P 0 fproperty of relational quotientg (R?1 )?1n(P ; R?1) P 0 fproperty of inverse relationg Rn(P ; R?1) P 0
and for (b),
fP g # fRg v fP 0g fLemma 25g fRg ; fP g v fP 0g ; fRg fhomomorphism, embeddingg R ; P P0 ; R 2
Theorem 14 (a) shows that the least structure-preserving encoding of [P ] under fRg is [Rn(P ; R?1)]. On the other hand, (b) does not allow explicit calculation of the encoding for an angelic update. Instead, the rule can be used as a proof rule, to check whether a suggested encoding fP g # D v fP 0g 23
holds. It can also be used for an implicit calculation of the encoding. If we can do a stepwise calculation of the form R;P
P0 ; R
then we have calculated a relation P 0 such that fP g # D v fP 0g holds. The functional update is handled by rewriting according to hf i = [jf j]. Thus the rule is hgi # D v hg0i jgj Rn(jgj ; R?1) In practical program development, a program is often speci ed as a general conjunctive speci cation fpg ; [Q], in terms of a precondition p and a next-state relation Q. The following generalisation of Theorem 14 (c) gives a rule for encoding such a speci cation. Theorem 15 Assume that D is the universally disjunctive abstraction fRg. Then fpg ; [Q] # D v fp0g ; [Q0] (im: R?1: p p0) ^ (Q0 (R ; jpj)n(Q ; R?1)) Proof The proof follows the same idea as the proof of Theorem 14 but it uses a number of algebraic properties of conjunctive speci cations, so we omit it for brevity. 2 A direct consequence of Theorem 15 is the following calculational rule for encoding a conjunctive speci cation: fpg ; [Q] # D v fim: R?1: pg ; [(R ; jpj)n(Q ; R?1 )]
4.4 Functional data re nement
An important special case of data re nement is functional data re nement where the decoding is of the form frg ; hf i. Here, p is a concrete invariant on ? and f : ? ! is an abstraction function which computes the unique abstract state corresponding to any concrete state in r. Functional data re nement was studied already by Hoare [12] in Hoare logic and by Back [1] in the re nement calculus. The structural rules are the same for functional data re nement as for data re nement in general, noting that the inverse of frg ; hf i is [f ?1 ] ; [r] (i.e, a demonic update followed by a guard). Thus we have S # D = frg ; hf i ; S ; [f ?1 ] ; [r] 24
where D = frg ; hf i. Here the assertion frg at the beginning and the guard [r] at the end express the requirement that r is an invariant. It then remains to nd a way of simplifying hf i ; S ; [f ?1]. In fact, we get useful special cases of the rules in Theorems 13 and 14: Theorem 16 Assume that p and r are predicates, f is a function, P is a relation, and D = frg ; hf i. Then (a) fpg # D v frg ; fp0g ; [r], if f ; p p0, (b) [p] # D v frg ; [p0 ] ; [r], if p0 f ; p, (c) [P ] # D = frg ; [jf j ; P ; f ?1 ] ; [r], and (d) fP g # D v frg ; fP 0g ; [r], if jf j ; P P 0 ; jf j. Proof We have
( (
fpg # D v frg ; fp0g ; [r] fde nition of encoding, assumption D = frg ; hf ig frg ; hf i ; fpg ; [f ?1] ; [r] v frg ; fp0g ; [r] fmonotonicity of ;g hf i ; fpg ; [f ?1] v fp0g fde nition of encodingg fpg #hf i v fp0g fTheorems 13 (d)g im: f ?1: p p0 fde nitions of subset, of image, and of f ?1g (8 (9 p: ^ = f: ) ) p0: ) fpredicate calculusg (8 p: (f: ) ) p0 : ) fde nition of composition and subsetg f ; p p0
which proves (a). The proofs of (b) and (d) are similar. For (c) we have [P ] # D = fde nition of encoding, assumption D = frg ; hf ig frg ; hf i ; [P ] ; [f ?1] ; [r] = frewrite hf i = [jf j], homomorphismg frg ; [jf j ; P ; f ?1] ; [r] 25
2
Again, a functional update is handled by rewriting it into a demonic update rst, as hgi = [jgj]. However, if f is bijective, then we get a simpler rule:
Corollary 17 Assume that D = frg ; hf i where f is a bijective function, and let f ?1 (temporarily) stand for the inverse function of f . Then
hgi # D = frg ; hf ; g ; f ?1i ; [r]
4.5 Backward data re nement
In the backward data re nement case (i.e., when the abstraction of a encoding is universally conjunctive), we do not get as strong rules as in the forward case. However, we still get a complete collection of rules that allow us to verify encoding of statements. As before, we only show the cases that improve on the general result in Theorem 7. Theorem 18 Assume that D is the universally conjunctive abstraction D = [R]. Then (a) abort # D = abort, if R is total, (c) magic # D = magic, if R is total, (d) fpg # D v fp0g im: R: p p0, and (e) [p] # D v [p0 ] p0 im: R: p Proof For (c) we assume that R is total. Then magic # D v S 0
fTheorem 5g [R] ; magic v S 0 ; [R] fde nitionsg (8q (8 R: : ) T) ) S 0: ( 0 80 R: 0: 0 ) q: 0): ) fpredicate calculusg (8q S 0: ( 0 80 R: 0: 0 ) q: 0): ) ) fspecialise q := false, simplifyg (8 S 0: ( 0 80 :R: 0 : 0): ) fR assumed totalg 26
(8 S 0: false: ) fde nition of magicg S 0 = magic which shows that magic # D = magic. All the other cases follow directly from the corresponding results in Theorem 7 and from the de nitions of duals, images and inverse images. 2 For the demonic update, the rule is similar to the forward case.
Theorem 19 Assume that D is the universally conjunctive abstraction [R]. Then
[P ] # D v [P 0] P 0 (R ; P )nR?1 Proof
[P ] # fRg v [P 0] fLemma 25g [R] ; [P ] v [P 0] ; [R] fhomomorphismg [R ; P ] v [P 0 ; R] fembeddingg R ; P P0 ; R fproperty of relation quotientg P 0 (R ; P )nR?1
2
However, for the angelic update there seems to be no simple rule; there are no algebraic laws that allow us to manipulate the expression [R] ; fP g v fP 0g ; [R].
5 Data re nement and decoding
Since the encoding S # D was de ned as the least solution to the equation D ; S v X ; D, it makes sense to consider the dual situation also. We refer to greatest solution to the equation D ; X v T ; D as the decoding of T by D, written T " D. Essentially, T " D is the greatest statement that is D-re ned to T . However, this equation does not necessarily have a solution at all (set 27
D = magic and T = abort). Furthermore, even if it has solutions, no greatest
solution need exist. If T " D does exist, then it satis es the following properties:
T " D vD T X vD T ) X v T " D
(decode1) (decode2)
5.1 Conditions for decoding
Before considering in more detail under what conditions the decoding actually exists, let us show its relationship to D-re nement.
Theorem 20 The de nition of T " D can equivalently be expressed by re-
quiring that
S v T " D S vD T holds for arbitrary S . The proof (in Appendix A) is similar to the proof of Theorem 5. It is also interesting to note that from Theorems 5 and 20 we directly nd that ((X X # D); (Y Y " D)) is a Galois connection: Thus, by Lemma 2 we know that this condition can be satis ed only if (X X # D) distributes over arbitrary joins. We have (X X # D): (t i 2 I Si) = (t i 2 I (X X # D): Si) fbeta reductiong (t i 2 I Si) # D = (t i 2 I Si # D) and this was in Theorem 7 shown to hold if D is universally disjunctive. Thus we can summarise: the decoding T " D exists for arbitrary monotonic predicate transformer T whenever the decoding D is universally disjunctive. Then it is de ned by each of the three equivalent conditions of Theorem 20. Theorem 20 indicates that decoding is essentially reverse encoding. We can interpret the decoding S " D as translating the statement S into a more abstract state space, without changing the nondeterminism, termination of strictness of the statement. In Section 7 we will see an example of how decoding can be used to reduce a correctness condition to a more abstract one. 28
Applying the rules f f v id and id v f f to the Galois connection pair ((X X # D); (Y Y " D)) we get the following result which shows that encoding and decoding each undoes the eect of the other, thus giving further justi cation for the naming of these operators: Corollary 21 Assume that abstraction D is universally disjunctive. Then (a) S v (S # D) " D (b) (T " D) # D v T From this and Theorems 6 and 22 (below) we also get the following, more convoluted equalities: ((S # D) " D) # D = S # D ((T " D) # D) " D = T " D
5.2 Decoding and the abstraction category
An interesting duality between encoding and decoding (for universally disjunctive abstraction statement D) is the following. Assume that the data re nement S vD T is given. Recall from Section 3.2 that we have
S vD S # D v T
which factors out the pure data re nement rst, and then the remaining algorithmic re nement. On the other hand, Theorem 20 shows that we can also reverse the order and factor out the algorithmic re nement rst:
S v T " D vD T
This raises the question whether S # D and T " D are related in the abstraction category. We have D ; (S # D) = fTheorem 10g D;D;S;D v fassumption S vD T g D;T ;D;D = fTheorem 24, belowg (T " D) ; D which shows that S # D vD T " D. Figure 4 illustrates all the relationships that can be deduced once S vD T is known, for universally disjunctive D. 29
S
S0 " D
? 6 ? ? D D D? ? ? ? S#D S0 6
Figure 4: Relationships derived from S vD T
5.3 Properties of decoding
The characterisation of decoding in Theorem 20 allow us to deduce homomorphism properties for decoding similar to those of encoding. First, we consider monotonicity:
Theorem 22 Decoding is monotonic in its rst argument: T v T0 ) T "D v T0 "D for universally disjunctive D. Proof
T "D v T0 "D
fGalois connectiong (T " D) # D v T 0 ( fCorollary 21 (b)g T v T0 2
Next, we collect other basic homomorphism properties.
Theorem 23 Assume that D is universally disjunctive. Then (a) abort = abort " D if D is terminating, (b) skip v skip " D, 30
(c) magic = magic " D, (d) fpg v fp0g " D p D: p0, (e) [p] v [p0] " D D : p0 p, (f) (S1 " D) ; (S2 " D) v (S1 ; S2) " D, (g) (u i 2 I Si " D) = (u i 2 I Si) " D, (h) (t i 2 I Si " D) v (t i 2 I Si) " D Proof We prove (d) as an example of how to use the corresponding properties for encoding. The proofs for the other parts are not hard (and they are even simpler if the result in Theorem 24 below is used).
fpg v fp0g " D fGalois connectiong fpg # D v fp0g fTheorem 7 (d)g D: p p0 fcharacterisation of Galois connectiong p D: p0 2
The re nements in Theorem 23 may seem to go the wrong the way, but the intuition can be explained as follows. Decoding is a kind of reverse re nement, so S " D is a less re ned version of S (with respect to the abstraction D). If S 0 and D are known, then nding a statement S such S v S 0 " D means nding some statement S that is D-re ned by S 0 .
5.4 Decoding with inverse statements
Decoding can also be expressed explicitly when the abstraction D is universally disjunctive
Theorem 24 Assume that S and T are monotonic predicate transformers and that D is universally disjunctive. Then
T "D = D ; T ;D 31
Proof We have, for arbitrary S ,
S v T "D
fGalois connectiong S #D v T fpart (a)g D;S;D vT fshunting (Lemma 3)g S vD;T ;D which shows T " D = D ; T ; D. 2
Theorems 10 and 24 and the shunting properties of inverse statements now allow us to characterise forward data re nement in a number of ways: Corollary 25 All the following conditions are equivalent, for monotonic S and S 0 and universally disjunctive D: (a) S # D v S 0 (f) S v S 0 " D (b) D ; S ; D v S 0 (e) S v D ; S 0 ; D (c) D ; S v S 0 ; D (d) S ; D v D ; S 0 The rule for composing encodings has a counterpart for decodings. Theorem 26 Decoding can be composed as follows: (S " D1 ) " D2 = S " (D1 ; D2) Proof (S " D1 ) " D2 = fTheorem 24g D2 ; D1 ; S ; D1 ; D2 = fproperty of inversesg D1 ; D2 ; S ; D1 ; D2 = fTheorem 24g S " (D1 ; D2)
2
In connection with Corollary 12 we stated that if S vD2 ;D1 T where either D1 or D2 is universally disjunctive, then S # D1 is the smallest solution to the equation S vD1 X vD2 T . In fact, if D2 is universally disjunctive then T " D2 is the greatest solution to this same equation (which has a complete lattice of solutions), and we have
S vD1 S # D1 v T " D2 vD2 T 32
5.5 Calculating decodings
When the abstraction D is explicitly given as an angelic update, we can improve slightly on the results in Theorem 23.
Corollary 27 Assume that D is the universally disjunctive abstraction D = fRg. Then (a) abort = abort " D if R is total, and (d) fpg v fp0g " D p im: R?1: p0. For the relational updates, we get duals from the results in Theorem 14.
Theorem 28 Assume that D is the universally disjunctive decoding fRg. Then
(a) [P ] v [P 0] " D P ; R?1 R?1 ; P 0, and (b) fP g v fP 0g " D P R?1n(P 0 ; R) Proof For (a), we have
[P ] v [P 0] " D fGalois connectiong [P ] # D v [P 0] fas in proof of Theorem 14g P ; R?1 R?1 ; P 0 and for (b),
fP g v fP 0g " D fGalois connectiong fP g # D v fP 0g fTheorem 14g R ; P P0 ; R fproperty of relation quotientg P R?1n(P 0 ; R) 2
Again, we do not give a separate rule for functional update; it is handled by rewriting according to hf i = fjf jg. 33
5.6 Decoding with a functional abstraction
Exactly as for encoding, we can try to get simpler special cases when the abstraction D used in an decoding is functional, i.e., when D is of the form frg ; hf i.
Theorem 29 Assume that D is the abstraction D = frg ; hf i. Then (a) fpg v fp0g " D r \ f ; p p0 , (b) [p] v [p0] " D r \ p0 f ; p, and (c) hgi v hg0i " D r g0 ; r ^ f ; g = g0 ; f . Proof For (a) we have
fpg v fp0g " D fproperty of encoding, assumption D = frg ; hf ig frg ; hf i ; fpg v fp0g ; frg ; hf i frewrite as angelic updates, homomorphismg fjrj ; jf j ; jpjg v fjp0j ; jrj ; jf jg fhomomorphismg jrj ; jf j ; jpj jp0j ; jrj ; jf j fde nitionsg (8 r: ^ = f: ^ p: ) p0: ^ r: ^ = f: ) fpredicate calculusg (8 r: ^ p: (f: ) ) p0: ) fde nitionsg r \ f ; p p0
and the proof of (b) is similar. Finally, for (c) we have
hgi v hg0i " D fproperty of encoding, assumption D = frg ; hf ig frg ; hf i ; hgi v hgi ; frg ; hf i frewrite as angelic updates, homomorphisms, de nitionsg (8 0 (9 r: ^ = f: ^ 0 = g: ) ) (9 0 0 = g0: ^ r: 0 ^ 0 = f: 0)) fpredicate calculusg 34
(8 r: ) r: (g: ) ^ g: (f: ) = f: (g0: )) fpredicate calculusg (8 r: ) r: (g: )) ^ (8 r: ) 0 = f: (g0: )) fde nitionsg r g0 ; r ^ f ; g = g0 ; f
2
For the relational updates we do not get simpler rules that those in Theorem 28.
6 Recursive statements Before looking in more detail on how encoding works with recursive statements, let us state the following very useful fusion theorem (attributed to Kleene).
Lemma 30 Assume that f : ! and g : ? ! ? are monotonic functions on complete lattices and that h : ? ! ? is continuous and k : ? ! ? is cocontinuous. Then
(a) h f v g h ) h: (: f ) v : g (b) k f w g k ) h: (: f ) w : g Here a function f is continuous if f: (tA) = (t a 2 A j f: a) for every directed set A (the set A is directed if (8x; y 2 A 9z 2 A x t y v z), i.e., if every pair of elements in A has an upper bound in A). Dually, a function f is cocontinuous if f: (uA) = (u a 2 A j f: a) for every codirected set A (and the set A is codirected if (8x; y 2 A 9z 2 A z v x u y)).
6.1 Encoding recursive statements
Encoding also distributes into the recursive constructs:
Theorem 31 Assume that f : ( 7! ) ! ( 7! ) and g : (? 7! ?) ! (? 7! ?) are monotonic functions that map monotonic predicate transformers to monotonic predicate transformers, and that
(8S f: S # D v g: (S # D)) Then
35
(a) : f # D v : g if D is strict and continuous, and (b) : f # D v : g. Proof For (a) we rst note that continuity of D implies that the function (X X # D) is continuous (the proof for this follows a straightforward pointwise extension argument). Then,
(: f ) # D v : g ( ffusion (Lemma 30 (a))g (X X # D) f v g (X X # D) ( fde nitions, beta conversiong (8S f: S # D v g: (S # D)) The proof of (b) is more direct: (: f ) # D v : g ( f xed-point inductiong (: f ) # D v g: ((: f ) # D) funfold xed pointg f: (: f ) # D v g: ((: f ) # D) ( fspecialisationg (8S f: S # D v g: (S # D))
2
Since any useful programming notation will include facilities for -recursion (or some form of iteration, which is semantically a special case of recursion), the result in Theorem 31 shows that it makes sense to require that abstraction be strict and continuous. For the xpoint constructs, we also get calculational rules:
Theorem 32 Assume that f : ( 7! ) ! ( 7! ) is a monotonic function
that maps monotonic predicate transformers to monotonic predicate transformers, and that D is universally disjunctive. Then
(a) : f # D v ( X f: (X " D) # D), and (b) : f # D v ( X f: (X " D) # D). 36
Proof For (a) we have : f # D v ( X f: (X " D) # D) ( fTheorem 31g (8S f: S # D v f: ((S # D) " D) # D) ( fmonotonicity of encoding (Theorem 6) and of f g (8S S v (S # D) " D) fCorollary 21g T and the derivation for the greatest xpoint is similar. 2
6.2 Iterations and loops
As an illustration to Theorem 32 we show how a weak iteration S is encoded. This construct is de ned as follows: S =^ (X S ; X u skip) and the operational intuition of S is that S is repeated some (demonically chosen) nite number of times. When D is universally disjunctive we have the following derivation: (X S ; X u skip) # D v fTheorem 32 (b)g (X (S ; (X " D) u skip) # D) v fTheorem 7g (X (S # D) ; (X " D # D) u (skip # D)) v fTheorem 7 and Corollary 21g (X (S # D) ; X u skip) so we get S # D v (S # D) This illustrates how Theorem 32 is applied: when the encoding is distributed all the way into the recursion expression, the decoding X " D disappears, by Corollary 21 (the last step of the derivation). A similar derivation can be used to derive the encoding rule for whileloops. First, we use the basic encoding rules to nd that (if b then S1 else S2 ) # D v if b0 then (S1 # D) else (S2 # D) 37
provided that D: b b0 D : b. A derivation like the one for weak iteration then gives (while b do S od) # D v while b0 do (S # D) od provided that D: b b0 D : b, if D is strict and continuous.
6.3 Decoding recursive statements
From the rules for encoding of xpoints we also get rules for decoding.
Theorem 33 Assume that f : ( 7! ) ! ( 7! ) and g : (? 7! ?) ! (? 7! ?) are monotonic functions that map monotonic predicate transformers to monotonic predicate transformers, and that
(8S f: (S " D) v g: S " D) Then
(a) : f v : g " D, if D is strict and continuous, and (b) : f v : g " D. Proof For (a) we have
: f v : g " D
fGalois connectiong (: f ) # D v : g ( fTheorem 31g (8S f: S # D v g: (S # D)) ( fgeneralisation (8 introduction)g f: S # D v g: (S # D) fGalois connectiong f: S v g: (S # D) " D ( fCorollary 21 (a), monotonicityg f: ((S # D) " D) v g: (S # D) " D ( fspecialise S := S # Dg (8S f: (S " D) v g: S " D (8S f: (S " D) v g: S " D) The proof for (b) is similar. 2 38
7 Examples and applications

We shall now illustrate the concepts and rules developed in this paper with a number of examples.
7.1 Encoding with context information
The rules above tell us how assertions and guards are transformed in encodings and decodings. However, we are often interested in using assertions and guards as context information, to help calculate the encoding or decoding of an associated statement. For details about how assertions and guards can be introduced, propagated and eliminated from a program text, we refer to [7].

In an encoding transformation of the form S # D ⊑ S′ or S ⊑ S′ " D we call S the source and S′ the target of the transformation. If the source has the form {p} ; S, then the state information represented by predicate p can be used as an assumption in the calculation of {p} ; S # D. Consider as an example the rule from Theorem 14 (a):

  [Q] # D ⊑ [Q′]   if Q′ ⊆ R\(Q ; R⁻¹)

where D = {R}. The condition on the right-hand side of this rule can be rewritten as follows:

  (∀γ γ′ σ • R.γ.σ ∧ Q′.γ.γ′ ⇒ (∃σ′ • Q.σ.σ′ ∧ R.γ′.σ′))

If the source instead had the form {p} ; [Q], then the rule would have the form

  {p} ; [Q] # D ⊑ [Q′]   if Q′ ⊆ R\(|p| ; Q ; R⁻¹)

and the condition on the right-hand side could be rewritten as

  (∀γ γ′ σ • p.σ ∧ R.γ.σ ∧ Q′.γ.γ′ ⇒ (∃σ′ • Q.σ.σ′ ∧ R.γ′.σ′))

Note how the context information p has become an added assumption (a conjunct in the antecedent) that can help in the proof. The same argument holds for all other rules: an assertion in the source of an encoding or decoding transformation can be used as an assumption in the proof of the transformation.

The use of guards is dual to the use of assertions: a guard statement in the target of a transformation can be used as context information. Consider the same example as for assertions above. If the target has the form [p′] ; [Q′], then the rule becomes

  [Q] # D ⊑ [p′] ; [Q′]   if Q′ ⊆ (|p′| ; R)\(Q ; R⁻¹)

where the condition on the right-hand side can be rewritten as

  (∀γ γ′ σ • p′.γ ∧ R.γ.σ ∧ Q′.γ.γ′ ⇒ (∃σ′ • Q.σ.σ′ ∧ R.γ′.σ′))

Again, the context information appears in the antecedent of the condition, i.e., as an assumption.
7.2 Encoding with program variables
The rules for encoding and decoding that we have given so far have reduced reasoning on the predicate transformer level to reasoning on the lower levels of the predicate transformer hierarchy: predicates, functions and relations. These rules are often elegant and algebraic. However, if we want to apply the rules in program derivation, then we need to translate the rules to the level of program variables. It is not our aim to derive a complete collection of such rules, but we shall give an example of how such rules are derived. In practice, what we get are syntactic rules for data refinement, much like those that have been described in detail elsewhere [8, 17, 18].

As an example, consider the rule for encoding a demonic assignment with a functional abstraction (Theorem 16 (c)):

  [P] # ({r} ; ⟨f⟩) = {r} ; [|f| ; P ; f⁻¹] ; [r]

In assignment notation, we have

  [x := x′ | b] # ({c} ; ⟨x/y := e⟩)
= {rule above}
  {c} ; [(x/y := x | x = e) ; (x := x′ | b) ; (y/x := y | x = e)] ; [c]
= {merge assignments and simplify}
  {c} ; [(x/y := x′ | b[x := e]) ; (y/x := y | x = e)] ; [c]
= {merge assignments and simplify}
  {c} ; [y := y′ | b[x, x′ := e, e[y := y′]]] ; [c]
= {rewrite guard as update and merge}
  {c} ; [y := y′ | b[x, x′ := e, e[y := y′]] ∧ c[y := y′]]

Thus we get the following rule for encoding in terms of program variables:

  [x := x′ | b] # ({c} ; ⟨x/y := e⟩) = {c} ; [y := y′ | b[x, x′ := e, e′] ∧ c′]

where e′ abbreviates e[y := y′] and c′ abbreviates c[y := y′].
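Applying this rule is a purely mechanical matter of substitution. As a quick, hypothetical illustration (none of the expressions below come from the examples in this paper), the following Python sketch uses sympy to instantiate the rule with b ≡ (x′ = x + 2), e ≡ 2y and c ≡ (y < 4), i.e. an abstraction that asserts y < 4 and then replaces y by x := 2y.

  import sympy as sp

  x, xp, y, yp = sp.symbols("x x' y y'")
  b = sp.Eq(xp, x + 2)        # abstract relation b in [x := x' | b]   (hypothetical)
  e = 2 * y                   # the abstraction assigns x := e          (hypothetical)
  c = y < 4                   # the assertion c of the abstraction      (hypothetical)

  e_prime = e.subs(y, yp)     # e' = e[y := y']
  c_prime = c.subs(y, yp)     # c' = c[y := y']

  # b[x, x' := e, e'] /\ c' -- the body of the resulting concrete demonic assignment
  concrete_body = sp.And(b.subs({x: e, xp: e_prime}), c_prime)
  print(concrete_body)        # essentially: 2*y' = 2*y + 2 together with y' < 4

The result says that the concrete statement may update y to y + 1, provided the new value still satisfies the assertion, which is what one would expect for this abstraction.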
A similar derivation gives the following rule for forward data refinement when the abstraction is of the form {x/y := x | c}, where c is a boolean expression relating the abstract variable x and the concrete variable y:

  [x := x′ | b] # {x/y := x | c} ⊑ {∃x • c} ; [y := y′ | (∀x • c ⇒ (∃x′ • c′ ∧ b))]

where c′ abbreviates c[x, y := x′, y′]. In this case the derivation is as follows:

  [x := x′ | b] # {x/y := x | c}
= {encoding rule (Theorem 15)}
  {im.((y/x := y | c)⁻¹).true} ; [((x/y := x | c) ; |true|)\((x := x′ | b) ; (y/x := y | c))]
= {merge assignments and simplify}
  {∃x • c ∧ T} ; [(x/y := x | c)\(y/x := y | (∃x′ • b ∧ c[x := x′]))]
= {quotient with assignments, simplify}
  {∃x • c} ; [y := y′ | (∀x • c ⇒ (∃x′ • b ∧ c[x, y := x′, y′]))]

As before, we have used the rules from Appendix B to manipulate assignments.
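This rule, too, can be sanity-checked semantically on a small instance. The sketch below (a hypothetical Python check, with small value ranges chosen for the illustration) takes c ≡ (x = y mod 3) as the relation between the abstract variable x and the concrete variable y and the abstract statement [x := x′ | x′ = (x + 1) mod 3]; the rule then yields (after simplification, and with a trivially true leading assertion) the concrete statement [y := y′ | y′ mod 3 = (y mod 3 + 1) mod 3]. The check verifies the commutation D ; S ⊑ S′ ; D by exhaustive enumeration.

  from itertools import combinations

  SIGMA = range(3)                 # abstract values of x
  GAMMA = range(9)                 # concrete values of y   (hypothetical small ranges)

  def powerset(u):
      u = list(u)
      return [frozenset(s) for r in range(len(u) + 1) for s in combinations(u, r)]

  # abstraction D = {x/y := x | c}: angelic update, D.q = { y | exists x . c and x in q }
  D = lambda q: frozenset(y for y in GAMMA if any(x == y % 3 and x in q for x in SIGMA))
  # abstract demonic assignment [x := x' | x' = (x + 1) mod 3]
  S = lambda q: frozenset(x for x in SIGMA if (x + 1) % 3 in q)
  # concrete statement produced by the rule (leading assertion is identically true here)
  Sc = lambda p: frozenset(y for y in GAMMA
                           if all(yp in p for yp in GAMMA if yp % 3 == (y % 3 + 1) % 3))

  assert all(D(S(q)) <= Sc(D(q)) for q in powerset(SIGMA))   # D ; S <= S' ; D
  print("forward data refinement rule checked on this instance")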
7.3 Example: block refinement
Consider a block refinement of the form

  begin var x := e ; S end ⊑ begin var y := e′ ; S′ end

A standard method for proving such a refinement is by simulation, which in our framework amounts to finding an abstraction statement D such that the following three conditions hold:

  ⟨x/ := e⟩ ⊑ ⟨y/ := e′⟩ ; D      (initialisation simulation)
  D ; S ⊑ S′ ; D                  (body simulation)
  D ; ⟨/x⟩ ⊑ ⟨/y⟩                 (finalisation simulation)

The block refinement can then be illustrated by a subcommuting diagram, as in Figure 5. The results of this paper show that the body simulation condition is equivalent to S # D ⊑ S′. The initialisation and finalisation conditions can also be simplified when the abstraction statement is an (angelic or demonic) assignment, as the following example shows.

We show some details of a simple block refinement, to illustrate data refinement with program variables. Consider the block

  begin var x := ∅ ; … ; ⟨x := x ∪ {z}⟩ ; … end
Figure 5: Simulation between blocks (subcommuting diagram relating the initialisation, body and finalisation of the abstract block to those of the concrete block via the abstraction statement D; diagram not reproduced).
Here the ellipses stand for statements that we do not consider further. The set x is to be implemented by an array (variables n and y), according to the abstraction statement

  D = ⟨x/n, y := arrset.n.y⟩

where arrset.n.y stands for the set {a | ∃i < n • y[i] = a} (we assume that y is a potentially infinite array, indexed by the natural numbers). This abstraction statement can be justified if elements are never removed from the set.

First we use the rule derived in Section 7.2 to calculate the data refinement of the assignment ⟨x := x ∪ {z}⟩. We have

  ⟨x := x ∪ {z}⟩ # D
= {rewrite deterministic assignment as demonic assignment}
  [x := x′ | x′ = x ∪ {z}] # ⟨x/n, y := arrset.n.y⟩
= {rule from Section 7.2}
  [n, y := n′, y′ | arrset.n′.y′ = arrset.n.y ∪ {z}]
⊑ {standard refinement derivation}
  ⟨n, y[n] := n + 1, z⟩

Note that we do not need to manipulate assignments that replace variables; this is taken care of by the rule for encoding. Thus, the resulting derivation follows the same outline as traditional data refinement derivations in the calculational style [17, 5]. Other statements inside the block body can be encoded by means of similar calculations.
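The last step, marked "standard refinement derivation", can also be confirmed by brute force: the deterministic update n, y[n] := n + 1, z must establish the relation of the derived demonic assignment, i.e. arrset.(n + 1).y′ = arrset.n.y ∪ {z}. The following sketch is a hypothetical Python rendering of that check, with arrset.n.y modelled as set(y[:n]) and the "infinite" array truncated to a small prefix.

  from itertools import product

  VALUES = range(3)
  for n in range(3):
      for y in product(VALUES, repeat=4):          # a small prefix of the array
          for z in VALUES:
              y2 = list(y)
              y2[n] = z                            # y[n] := z
              # arrset.(n+1).y'  =  arrset.n.y  union  {z}
              assert set(y2[:n + 1]) == set(y[:n]) | {z}
  print("the update n, y[n] := n + 1, z establishes the derived specification")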
The block initialisation is calculated by noting that
  ⟨x/ := e⟩ ⊑ ⟨y/ := e′⟩ ; D   holds if   ⟨x/ := e⟩ ; D⁻¹ ⊑ ⟨y/ := e′⟩

where D⁻¹ is the inverse of the abstraction statement. We have

  ⟨x/ := ∅⟩ ; D⁻¹
= {rewrite assignment, rule for inverses}
  [x/ := x | x = ∅] ; [n, y/x := n, y | x = arrset.n.y]
= {homomorphism, merge assignments (see Appendix B)}
  [n, y/ := n, y | arrset.n.y = ∅]
⊑ {make assignment deterministic}
  ⟨n, y/ := 0, arb⟩

where arb stands for some arbitrary element (it does not matter how y is initialised). A similar calculation shows that

  D ; ⟨/x⟩ ⊑ ⟨/y⟩

and we have the refinement

  begin var x := ∅ ; … ; ⟨x := x ∪ {z}⟩ ; … end
⊑ begin var n, y := 0, arb ; … ; ⟨n, y[n] := n + 1, z⟩ ; … end
7.4 Least data refinement
The following example illustrates the difference between the least structure-preserving data refinement and the all-over least data refinement. It also shows the calculation of an encoding using the definition. We consider the assignment a := a + 1 under the abstraction {a/c := a | c = a div 2}. We have

  ⟨a := a + 1⟩ # {a/c := a | c = a div 2}
= {Theorem 10}
  {a/c := a | c = a div 2} ; ⟨a := a + 1⟩ ; [c/a := c | c = a div 2]
= {rewrite demonic update as a deterministic statement}
  {a/c := a | c = a div 2} ; ⟨a := a + 1⟩ ; ⟨c/a := a div 2⟩
= {merge assignments}
  {a/c := a | c = a div 2} ; ⟨c/a := (a + 1) div 2⟩
= {rewrite deterministic statement as angelic update}
  {a/c := a | c = a div 2} ; {c/a := c | c = (a + 1) div 2}
= {homomorphism}
  {(a/c := a | c = a div 2) ; (c/a := c | c = (a + 1) div 2)}
= {merge assignments}
  {c := c′ | ∃a • c = a div 2 ∧ c′ = (a + 1) div 2}
= {arithmetic, predicate calculus}
  {c := c′ | ∃a • (a = 2c ∧ c′ = c) ∨ (a = 2c + 1 ∧ c′ = c + 1)}
= {predicate calculus}
  {c := c′ | c′ = c ∨ c′ = c + 1}
= {rewrite angelic update as angelic choice}
  skip ⊔ ⟨c := c + 1⟩

Here we have used the rules in Appendix B to manipulate assignments. Thus the least data refinement of a := a + 1 is an angelic choice between skip and c := c + 1.

If we apply Theorem 14 (b) (in its syntactic form, as given in Section 7.2), then we get

  ⟨a := a + 1⟩ # {a/c := a | c = a div 2}
⊑ {syntactic rule for encoding}
  [c := c′ | (∀a • c = a div 2 ⇒ (∃a′ • c′ = a′ div 2 ∧ a′ = a + 1))]
= {predicate calculus}
  [c := c′ | (∀a • c = a div 2 ⇒ c′ = (a + 1) div 2)]
= {arithmetic, predicate calculus}
  [c := c′ | (∀a • (a = 2c ⇒ c′ = (a + 1) div 2) ∧ (a = 2c + 1 ⇒ c′ = (a + 1) div 2))]
= {predicate calculus}
  [c := c′ | c′ = (2c + 1) div 2 ∧ c′ = (2c + 2) div 2]
= {arithmetic}
  [c := c′ | F]
= {definition of magic}
  magic

which shows that the least structure-preserving data refinement (and in fact, the least conjunctive data refinement) is magic.
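The difference between the two results can also be seen by brute force. Under the abstraction c = a div 2, every concrete value c represents the two abstract values 2c and 2c + 1; the angelic (least) refinement may answer for either representative, whereas a conjunctive refinement would have to produce one new concrete value that is correct for both, and no such value exists. A hypothetical Python check of this reading:

  A = range(0, 20)                                    # a sample of abstract values
  # least (angelic) data refinement: some a with c = a div 2 and c' = (a + 1) div 2
  angelic = {(a // 2, (a + 1) // 2) for a in A}
  print(sorted({cp - c for (c, cp) in angelic}))      # [0, 1]: skip or c := c + 1

  # least conjunctive refinement: c' must work for every a with c = a div 2
  conjunctive = set()
  for c in range(0, 9):
      succ_even = (2 * c + 1) // 2                    # (a + 1) div 2 for a = 2c
      succ_odd = (2 * c + 2) // 2                     # (a + 1) div 2 for a = 2c + 1
      conjunctive |= {(c, cp) for cp in ({succ_even} & {succ_odd})}
  print(conjunctive)                                  # empty: the statement is magic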
7.5 Abstracting properties
As an application of decoding, we shall now show how a correctness property can be translated to a more abstract level.
Theorem 34 Assume that D is a universally disjunctive abstraction. Then

  p {| S " D |} D⁻¹.q  ⇒  D.p {| S |} q

Proof

  D.p {| S |} q
≡ {definition of correctness}
  D.p ⊆ S.q
⇐ {property of inverses, monotonicity}
  D.p ⊆ (D ; D⁻¹ ; S ; D ; D⁻¹).q
≡ {decoding in terms of inverses}
  D.p ⊆ D.((S " D).(D⁻¹.q))
⇐ {monotonicity}
  p ⊆ (S " D).(D⁻¹.q)
≡ {definition of correctness}
  p {| S " D |} D⁻¹.q
□

Theorem 34 gives us the following rule for reducing a correctness condition using decoding:

  if   p {| S |} q,   S ⊑ S′ " D,   p′ ⊆ D.p   and   q ⊆ D⁻¹.q′,   then   p′ {| S′ |} q′
The idea is that we are free to choose an abstraction D that suits the situation. As a small example, consider the correctness condition

  true {| if x < y then x, y := y, x else x := y |} (x ≥ y)      (*)

Since the exact values of x and y are not important, it should be possible to reduce this to a correctness condition where the unnecessary details about x and y are abstracted away. Recall from Section 2.3 that the conditional statement in (*) can be written as

  S′ = {x < y} ; ⟨x, y := y, x⟩ ⊔ {x ≥ y} ; ⟨x := y⟩

Next, we introduce the abstraction D = ⟨b/x, y := (x ≥ y)⟩ and start calculating the decoding. For the first assertion we have

  (x < y)
= {arithmetic}
  ¬(x ≥ y)
= {predicate calculus}
  (¬b)[b := (x ≥ y)]
= {assignment property (Appendix B)}
  true ∩ ((b/x, y := (x ≥ y)) ; (¬b))

so {¬b} ⊑ {x < y} " D by Theorem 29 (a). For the first assignment we then have (with context assumption x < y)

  (x, y := y, x) ; (b/x, y := (x ≥ y))
= {merge assignments}
  (b/x, y := (y ≥ x))
= {assumption x < y}
  (b/x, y := T)
= {split assignment}
  (b/x, y := (x ≥ y)) ; (b := T)

so ⟨b := T⟩ ⊑ ([x < y] ; ⟨x, y := y, x⟩) " D by Theorem 29 (c) (where the guard statement is context information). For the second assertion we similarly find {b} ⊑ {x ≥ y} " D, and then skip ⊑ ([x ≥ y] ; ⟨x := y⟩) " D (where the guard statement is again context information). Thus we have

  {¬b} ; ⟨b := T⟩ ⊔ {b} ; skip ⊑ S′ " D

by Theorem 23 (f) and (g). Furthermore, we find true ⊆ D.true and also D⁻¹.(x ≥ y) = b. Thus, by the rule derived from Theorem 34, we have reduced the correctness condition (*) to the following:

  true {| if ¬b then b := T else skip |} b

which is then easy to verify. This illustrates how a correctness condition can be reduced to a finite-state level, where automatic verification methods can be used.
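The reduction can also be double-checked by exhaustive testing over a small range of values — exactly the kind of finite-state check that the abstraction makes possible. The sketch below is a hypothetical Python rendering (not part of our formal development) that runs the concrete conditional, the abstract program and the abstraction b = (x ≥ y) side by side.

  def concrete(x, y):
      return (y, x) if x < y else (y, y)     # if x < y then x, y := y, x else x := y

  def abstract(b):
      return True if not b else b            # if not b then b := T else skip

  for x in range(-3, 4):
      for y in range(-3, 4):
          x2, y2 = concrete(x, y)
          assert x2 >= y2                           # the concrete postcondition of (*)
          assert abstract(x >= y) == (x2 >= y2)     # the abstraction commutes with the programs
  for b in (False, True):
      assert abstract(b)                            # the reduced, finite-state condition
  print("correctness condition (*) and its abstract reduction both check out")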
8 Conclusion

We have defined an encoding and a decoding operator on predicate transformers and investigated their place in the predicate transformer hierarchy. In particular, we investigated the algebraic properties of encoding and decoding and their application to data refinement.

The investigation has led to a proliferation of similar and linked results. In order to see the structure, we can consider the results in the light of three independent dichotomies:

1. Encoding vs decoding: Encoding translates an abstract program into a concrete one, in the sense that an encoding step can be included in a program derivation. The dual operation is decoding, which translates a concrete program into a more abstract one.

2. Semantics vs syntax: The rules for encoding and decoding are first developed on a semantic level, as properties of predicate transformers that are reduced to properties of the predicates, functions and relations that the predicate transformers are built from. From these rules we then derive syntactic rules that allow us to apply encoding and decoding to programs that are described in terms of program variables and assignments to them.

3. Forward vs backward data refinement: At the abstract level, encoding is defined as a single concept, using the notion of a data refinement with a general abstraction statement. However, in order to derive rules that reduce encoding to the lower levels in the predicate transformer hierarchy, there was a need to separate forward data refinement (disjunctive abstraction statements) and backward data refinement (conjunctive abstraction statements).

The idea of calculating data refinements is not new, but most existing work has concentrated on the syntactic level, with rules that explicitly talk about the program variables and the boolean expressions involved [1, 10, 18]. Hoare, He and Sanders [14] make use of the weakest prespecification [13] and a dual strongest postspecification to calculate data refinements at an algebraic level. However, they work in a relational framework, which means that they do not model nontermination properly, and they have to handle forward and backward data refinement separately.

Traditionally, theories and methods of data refinement have only considered the standard (forward) notion of data refinement. Gardiner and Morgan were the first to handle forward and backward data refinement in a single rule [11], but they did not consider the calculation of data refinement in this general framework. As noted above, the problem of finding the most general data refinement (often referred to as the weakest simulation) has also been handled extensively in a relational framework [19], but the dual problem (which we solve by decoding) does not have a general solution when programs are modeled as relations. Our use of inverse statements in data refinement goes back to older work [3, 5, 20], but the combination of all three dichotomies mentioned above has to our knowledge not been considered before.
A Additional proofs

Proof of Theorem 7

The proofs of the different parts of this theorem all follow the same general pattern, making use of the specific algebraic properties of the statement in question. For (a) we assume that D is strict. Then

  abort # D ⊑ abort
≡ {Theorem 5}
  D ; abort ⊑ abort ; D
≡ {D strict implies D ; abort = abort, general rule abort ; D = abort}
  abort ⊑ abort
≡ {refinement is reflexive}
  T

which shows that abort # D = abort. For (b) we have

  skip # D ⊑ skip
≡ {property (wrap)}
  D ; skip ⊑ skip ; D
≡ {skip is unit}
  D ⊑ D
≡ {refinement is reflexive}
  T

For (c) we assume that D is strict and terminating. We then have, for arbitrary S:

  magic # D ⊑ S
≡ {property (wrap)}
  D ; magic ⊑ S ; D
≡ {D terminating implies D ; magic = magic}
  magic ⊑ S ; D
≡ {D is strict}
  S = magic

which shows that magic # D = magic. For (d) we have
  {p} # D ⊑ {p′}
≡ {property (wrap)}
  D ; {p} ⊑ {p′} ; D
≡ {definitions}
  (∀q • D.(p ∩ q) ⊆ p′ ∩ D.q)

{mutual implication}

  (∀q • D.(p ∩ q) ⊆ p′ ∩ D.q)
⇒ {specialise q := p}
  D.p ⊆ p′ ∩ D.p
≡ {lattice property}
  D.p ⊆ p′

  (∀q • D.(p ∩ q) ⊆ p′ ∩ D.q)
⇐ {general rule S.(p ∩ q) ⊆ S.p ∩ S.q}
  (∀q • D.p ∩ D.q ⊆ p′ ∩ D.q)
⇐ {monotonicity}
  D.p ⊆ p′

so {p} # D ⊑ {p′} ≡ D.p ⊆ p′. Next, for (e) when D is universally disjunctive:

  [p] # D ⊑ [p′]
≡ {property (wrap)}
  D ; [p] ⊑ [p′] ; D
≡ {definitions}
  (∀q • D.(¬p ∪ q) ⊆ ¬p′ ∪ D.q)
≡ {D assumed disjunctive}
  (∀q • D.(¬p) ∪ D.q ⊆ ¬p′ ∪ D.q)

{mutual implication}

  (∀q • D.(¬p) ∪ D.q ⊆ ¬p′ ∪ D.q)
⇒ {specialise q := false}
  D.(¬p) ∪ D.false ⊆ ¬p′ ∪ D.false
≡ {D assumed strict}
  D.(¬p) ⊆ ¬p′
≡ {antimonotonicity of negation, definition of dual}
  p′ ⊆ D°.p

  (∀q • D.(¬p) ∪ D.q ⊆ ¬p′ ∪ D.q)
⇐ {monotonicity}
  D.(¬p) ⊆ ¬p′
≡ {antimonotonicity of negation, definition of dual}
  p′ ⊆ D°.p

so in this case [p] # D ⊑ [p′] ≡ p′ ⊆ D°.p.

Consider (e) again, now for the case when D is universally conjunctive. Before the main proof we note the following, for arbitrary conjunctive S:

  S.(¬p ∪ q) ⊆ ¬S.p ∪ S.q
≡ {shunting rule}
  S.p ∩ S.(¬p ∪ q) ⊆ S.q
≡ {S conjunctive}
  S.(p ∩ (¬p ∪ q)) ⊆ S.q
≡ {lattice properties}
  S.(p ∩ q) ⊆ S.q
≡ {S monotonic}
  T

Now assume that D is universally conjunctive. Then we have

  [p] # D ⊑ [p′]
≡ {property (wrap)}
  D ; [p] ⊑ [p′] ; D
≡ {definitions}
  (∀q • D.(¬p ∪ q) ⊆ ¬p′ ∪ D.q)

{mutual implication}

  (∀q • D.(¬p ∪ q) ⊆ ¬p′ ∪ D.q)
⇒ {specialise q := p}
  D.true ⊆ ¬p′ ∪ D.p
≡ {D terminating implies D.true = true, shunting}
  p′ ⊆ D.p

  (∀q • D.(¬p ∪ q) ⊆ ¬p′ ∪ D.q)
⇐ {derivation above, D assumed conjunctive}
  (∀q • ¬D.p ∪ D.q ⊆ ¬p′ ∪ D.q)
⇐ {monotonicity}
  ¬D.p ⊆ ¬p′
≡ {property of complement}
  p′ ⊆ D.p

so in this case [p] # D ⊑ [p′] ≡ p′ ⊆ D.p.
For (f) we have

  (S1 ; S2) # D ⊑ (S1 # D) ; (S2 # D)
⇐ {property (encode2)}
  D ; S1 ; S2 ⊑ (S1 # D) ; (S2 # D) ; D
⇐ {property (encode1)}
  D ; S1 ; S2 ⊑ (S1 # D) ; D ; S2
⇐ {monotonicity}
  D ; S1 ⊑ (S1 # D) ; D
≡ {property (encode1)}
  T

Now, for (g) we have

  (⊓ i ∈ I • Si) # D ⊑ (⊓ i ∈ I • Si # D)
⇐ {property (encode2)}
  D ; (⊓ i ∈ I • Si) ⊑ (⊓ i ∈ I • Si # D) ; D
≡ {distributivity}
  D ; (⊓ i ∈ I • Si) ⊑ (⊓ i ∈ I • (Si # D) ; D)
⇐ {D monotonic implies D ; (⊓ i ∈ I • Si) ⊑ (⊓ i ∈ I • D ; Si)}
  (⊓ i ∈ I • D ; Si) ⊑ (⊓ i ∈ I • (Si # D) ; D)
⇐ {property (encode1)}
  T

Finally, for (h) we have

  (⊔ i ∈ I • Si) # D ⊑ (⊔ i ∈ I • Si # D)
⇐ {property (encode2)}
  D ; (⊔ i ∈ I • Si) ⊑ (⊔ i ∈ I • Si # D) ; D
≡ {distributivity; D universally disjunctive}
  (⊔ i ∈ I • D ; Si) ⊑ (⊔ i ∈ I • (Si # D) ; D)
⇐ {property (encode1)}
  T
□
Proof of Theorem 8

For (a) we have

  S # abort ⊑ abort
≡ {Theorem 5}
  abort ; S ⊑ abort ; abort
≡ {properties of abort}
  T

Next, for (b) we have

  S # skip ⊑ S
≡ {Theorem 5}
  skip ; S ⊑ S ; skip
≡ {skip is unit}
  T

and

  S ⊑ S # skip
≡ {skip is unit}
  skip ; S ⊑ (S # skip) ; skip
≡ {property (encode1)}
  T

For (c) we have

  S # magic ⊑ S′
≡ {Theorem 5}
  magic ; S ⊑ S′ ; magic
≡ {definitions}
  (∀q • true ⊆ S′.true)
≡ {chaos is the least of all terminating predicate transformers}
  chaos ⊑ S′

which shows that S # magic = chaos. Finally, for (d) we have

  S # (D1 ⊔ D2) ⊑ (S # D1) ⊔ (S # D2)
≡ {Theorem 5}
  (D1 ⊔ D2) ; S ⊑ ((S # D1) ⊔ (S # D2)) ; (D1 ⊔ D2)
≡ {distributivity}
  D1 ; S ⊔ D2 ; S ⊑ (S # D1) ; D1 ⊔ (S # D1) ; D2 ⊔ (S # D2) ; D1 ⊔ (S # D2) ; D2
⇐ {monotonicity of join}
  D1 ; S ⊔ D2 ; S ⊑ (S # D1) ; D1 ⊔ (S # D2) ; D2
⇐ {property (encode1)}
  T
□
Proof of Theorem 20

First, assume that conditions (decode1) and (decode2) hold. Then

  S ⊑ T " D
⇒ {monotonicity}
  D ; S ⊑ D ; (T " D)
⇒ {property (decode1), transitivity}
  D ; S ⊑ T ; D

and (decode2) with X := S gives the implication in the opposite direction. Now assume that

  S ⊑ T " D  ≡  D ; S ⊑ T ; D

for arbitrary S. Then

  D ; (T " D) ⊑ T ; D
≡ {assumption with S := T " D}
  T " D ⊑ T " D
≡ {refinement is reflexive}
  T

and

  D ; X ⊑ T ; D
≡ {assumption with S := X}
  X ⊑ T " D
□
B Manipulating assignments

The basic rules for merging and splitting assignments are essentially syntactic versions of the definitions of the basic operators on relations and functions. For the ordinary relational assignment we have the following merge/split rules, one for the simple case and one for the general case:

  (x := x′ | b) ; (x := x′ | c) = (x := x′ | (∃x″ • b[x′ := x″] ∧ c[x := x″]))
  (y/x := y′ | b) ; (z/y := z′ | c) = (z/x := z′ | (∃y′ • b ∧ c[y := y′]))

In addition to relational composition, relational quotient is used in a number of rules. Thus, we also need rules for working with quotients in the assignment notation:

  (x := x′ | b)\(x := x′ | c) = (x := x′ | (∀x″ • b[x′ := x″] ⇒ c[x := x″]))
  (y/x := y′ | b)\(z/y := z′ | c) = (z/x := z′ | (∀y′ • b ⇒ c[y := y′]))

For functional assignments, we get the following special cases of the merge/split rules:

  (x := d) ; (x := e) = (x := e[x := d])
  (y/x := d) ; (z/y := e) = (z/x := e[y := d])

All these rules require that the assignments involved have the same target (the target of an assignment is the collection of variables to which values are assigned). We can use the following frame rules to make the targets of two assignments equal:

  (x := x′ | b) = (x, z := x′, z′ | b ∧ z′ = z)
  (y/x := y′ | b) = (y, z/x, z := y′, z′ | b ∧ z′ = z)
  (x := e) = (x, z := e, z)
  (y/x := e) = (y, z/x, z := e, z)

By using the merge/split rules twice in succession, we can reorder assignments. For example, the following commutativity rule can be proved using the merge/split and frame rules:

  (y/x := y′ | d) ; (z/w := z′ | e) = (z/w := z′ | e) ; (y/x := y′ | d)

provided that w does not occur free in d and x does not occur free in e. Finally, we note the following rule for evaluating an expression after an assignment to a variable:

  (x := d) ; e = e[x := d]

Note that an expression is really a function that maps states to values; thus it can be composed with an assignment (which is a function that maps states to states).
References

[1] R.J. Back. Correctness Preserving Program Refinements: Proof Theory and Applications, volume 131 of Mathematical Centre Tracts. Mathematical Centre, Amsterdam, 1980.

[2] R.J. Back. A calculus of refinements for program derivations. Acta Informatica, 25:593–624, 1988.

[3] R.J. Back. Changing data representation in the refinement calculus. In 21st Hawaii International Conference on System Sciences, January 1989.

[4] R.J. Back, J. Grundy, and J. von Wright. Structured calculational proof. Formal Aspects of Computing, 9:469–483, 1997.

[5] R.J. Back and J. von Wright. Refinement calculus, part I: Sequential programs. In REX Workshop for Refinement of Distributed Systems, volume 430 of Lecture Notes in Computer Science, Nijmegen, the Netherlands, 1989. Springer-Verlag.

[6] R.J. Back and J. von Wright. Statement inversion and strongest postcondition. Science of Computer Programming, 20:223–251, 1993.

[7] R.J. Back and J. von Wright. Refinement Calculus: A Systematic Introduction. Springer-Verlag, 1998.

[8] W. Chen and J.T. Udding. Towards a calculus of data refinement. In Mathematics of Program Construction, volume 375 of Lecture Notes in Computer Science, Groningen, the Netherlands, June 1989. Springer-Verlag.

[9] E.W. Dijkstra. A Discipline of Programming. Prentice-Hall International, 1976.

[10] P.H. Gardiner and C.C. Morgan. Data refinement of predicate transformers. Theoretical Computer Science, 87(1):143–162, 1991.

[11] P.H. Gardiner and C.C. Morgan. A single complete rule for data refinement. Formal Aspects of Computing, 5(4):367–383, 1993.

[12] C.A.R. Hoare. Proofs of correctness of data representation. Acta Informatica, 1(4):271–281, 1972.

[13] C.A.R. Hoare and J. He. The weakest prespecification. Information Processing Letters, 24:127–132, 1987.

[14] C.A.R. Hoare, J. He, and J.W. Sanders. Prespecification in data refinement. Information Processing Letters, 25:71–76, 1987.

[15] He Jifeng and C.A.R. Hoare. Data refinement in a categorical setting. Technical Report PRG 90, Oxford University Computing Laboratory, November 1990.

[16] C.C. Morgan. The specification statement. ACM Transactions on Programming Languages and Systems, 10(3):403–419, July 1988.

[17] C.C. Morgan and P.H. Gardiner. Data refinement by calculation. Acta Informatica, 27:481–503, 1990.

[18] J.M. Morris. Laws of data refinement. Acta Informatica, 26:287–308, 1989.

[19] W-P. de Roever and K. Engelhardt. Data Refinement: Model-Oriented Proof Methods and their Comparison. Cambridge Tracts in Theoretical Computer Science 47. Cambridge University Press, 1998.

[20] J. von Wright. The lattice of data refinement. Acta Informatica, 31:105–135, 1994.
Turku Centre for Computer Science Lemminkaisenkatu 14 FIN-20520 Turku Finland http://www.tucs.abo.
University of Turku Department of Mathematical Sciences
Abo Akademi University Department of Computer Science Institute for Advanced Management Systems Research
Turku School of Economics and Business Administration Institute of Information Systems Science