Generating and Model Checking a Hierarchy of Abstract Models

Scott Hazelhurst
Programme for Highly Dependable Systems
Department of Computer Science
University of the Witwatersrand, Johannesburg
2050 Wits, South Africa
[email protected]

2 March 1999

Abstract

The use of automatic model checking algorithms to verify detailed gate or switch level designs of circuits is very attractive because the method is automatic and such models can accurately capture detailed functional, timing, and even subtle electrical behaviour of circuits. The use of binary decision diagrams has extended by orders of magnitude the size of circuits that can be so verified, but there are still very significant limitations due to the computational complexity of the problem. Verifying abstract versions of the model is attractive to reduce computational costs, but this poses the problem of how to build abstractions easily without losing the accuracy of the low-level model. This paper proposes a method of bridging the gap between detailed designs and abstract models, and presents preliminary theoretical and experimental results. Starting with a detailed, low-level model of a circuit (which uses BDDs for representing data), the human verifier picks a set of properties that s/he believes characterises the circuit. These properties are automatically verified by a model checking algorithm. Once proved, these properties are used to synthesise automatically an abstract model of the circuit, which can use symbolic representation of data. An automatic model checking algorithm can then be applied to the abstract model to verify more properties of the circuit. The preliminary results, although limited, were promising: the abstraction could be constructed and verified with relatively modest cost, thereby reducing the human cost of verification significantly.

1 Introduction

1.1 Motivation

The development of efficient model checking algorithms (such as [6, 25]) based on binary decision diagrams (BDDs) has meant that automatic verification methods are now applicable to a wide range of circuits. However, although the use of BDDs has extended the size of circuits that can be verified by orders of magnitude, it does not change the underlying computational complexity of model checking, and BDDs have their own limitations. The last few years have seen considerable work put into extending model checking algorithms to deal with these limitations. One tack has been to use methods such as abstraction and compositionality, often combined with the use of theorem proving [8, 13, 17, 20, 21, 23]. This has been very effective, but has the drawback that significant human intervention is often needed, or that datapath or timing information is abstracted away (which is not always appropriate). Another tack has been to extend the range of data structures used in the model checking process, and the use of more advanced decision diagrams has been very effective (for examples, see [5, 7]). The difficulty with this is that often the starting point for verification is a design given at the gate or switch level, for which the natural data representation to use is BDDs. Although work is being done on generating more abstract models directly from circuit descriptions, this is not always possible, particularly when it is important to capture electrical or detailed timing properties of the circuit. Thus

a starting point of this paper is that verifying circuits which have models that are naturally represented using BDDs (or equivalent data structures) will remain an important problem to be solved.

1.2 Research Problem

This paper proposes a method of abstraction, thereby bridging the gap between the low-level model extracted automatically from a circuit description and higher-level methods of data representation. The focus of the abstraction mechanism is to identify structure in the circuit, which allows more powerful data representation methods to be used. The basic approach is as follows. Suppose that we have some model M which we wish to verify, and that M is naturally represented at a very detailed level using data structures such as BDDs.

- The human verifier selects a set of properties {g_1, …, g_n} which s/he believes are the important properties that characterise the circuit.
- Each property g_i is verified automatically using a model checking algorithm.
- A model M_A is generated automatically from the properties g_1, …, g_n. M_A is an abstraction of M in the following sense: every property that is true of M_A is true of M, and every property that is false of M_A is false of M. (There will be some properties of M that are neither true nor false of M_A.) Each g_i, and any properties inferable from the g_i, will be true of M_A. The key here is that M_A has a higher-level representation. While BDDs may still be used to represent boolean data, other data structures will be used for data such as integers and vectors.
- The human verifier uses an automatic model checking algorithm on M_A to verify more properties.

In principle, the process of abstracting could be carried out at a number of different levels.
The idea of constructing a model from a set of properties is a common one in the study of logic (especially the study of logic in computer science). The contribution of this paper is to show how such a construction can be done for circuit models. The significance of this is that it provides a practical way of creating abstractions with relatively minimal human intervention. This abstraction in turn can be used for practical verification. In a similar technical setting to the one presented here, Zhu and Seger also proposed a method of model construction from a set of assertions [27]. However, not only is their model construction very different from the one presented here, but the purpose of the construction was different. Their goal was to explore the completeness of an inference system based on a compositional theory of trajectory evaluation [17], whereas the goal of this paper is to present a practical method of circuit verification.

1.3 Overview of paper

Section 2 describes the technical setting of the model checking algorithm used, introducing the method of symbolic trajectory evaluation. Section 3 presents a method of constructing an abstract model from a set of properties proved about a model. This abstract model is shown to be sound with respect to this set of properties and to be the weakest model that could be generated from the set of properties. Section 4 shows how this theory can be implemented in practice for a restricted version of the logic. Section 5 presents some experimental data, and Section 6 concludes.

2 Background

The model checking algorithm used is symbolic trajectory evaluation (STE), originally proposed by Bryant and Seger [2, 25] and later extended by Hazelhurst and Seger [14, 16]. It is an approach that has had much success in verifying a variety of circuits (e.g. [3, 9]), and it has an associated compositional theory which has extended its use (e.g. [1, 19]). Section 2.1 discusses STE's novel method of representing state. Section 2.2 defines a simple temporal logic, the language used for expressing properties of interest. Section 2.3 presents the method of

symbolic trajectory evaluation, and Section 2.4 outlines how the compositional theory complements the model checking algorithm. As only an outline of the theory can be presented here, the interested or skeptical reader will have to look elsewhere for a full presentation [18, 25].

2.1 State space representation

The novel feature of STE is its method of representing the state space of a model using a complete lattice: the state space is a set of states, together with a partial order ⊑ on states. The partial order is an information ordering: the higher up in the lattice, the more that is known about the state of the system. A state is an abstraction of the states above it in the state space, so if s is less than s1 and s2 it contains the common features of s1 and s2 and suppresses their differences. The computational advantage of this approach is that, given the right logical framework, proving that a property holds of a state s implies that the property also holds of all states above s in the information order.

For circuit state spaces, this has a very natural realisation. A circuit is modelled as a set of nodes, each of which can have one of four values: a high voltage (H), a low voltage (L), an unknown or 'don't care' voltage (U), and an inconsistent or overconstrained voltage (Z). Let C = {U, L, H, Z}, and define a partial order ⊑ by U ⊑ x and x ⊑ Z for all x ∈ C. C is a complete lattice. If a circuit has n nodes, the state space is C^n, which in turn is a complete lattice (the partial order extends pointwise to tuples). We shall want to examine sequences of states – the partial order extends to sequences naturally: s0 s1 s2 … ⊑ t0 t1 t2 … exactly when, for all i, s_i ⊑ t_i.

The behaviour of the circuit is modelled by a next state function Y. Accurate models (including timing) of gate and switch-level circuits can be extracted automatically (e.g. from VHDL descriptions). Formally, the model is M = (⟨S, ⊑⟩, Y), where ⟨S, ⊑⟩ is a complete lattice under the order ⊑, and Y : S → S is a monotonic function. In analysing the behaviour of such a model, we will be interested in sequences of states, in particular those sequences of states that are compatible with the next state function. Such sequences are called trajectories.

Definition 2.1 If σ = s0 s1 s2 … is a sequence, it is a trajectory if, for all i, Y(s_i) ⊑ s_{i+1}.

The sequences are infinite. In principle, if there is a finite state space, the sequence must start repeating itself, and thus all model checking questions can be answered by examining finite prefixes. Practice is easier – all the sequences we shall deal with have only a finite number of non-bottom elements so it is easy to determine the finite prefix that must be analysed.
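As a concrete illustration, the following Python sketch (not the paper's implementation; the two-node inverter and its next state function are invented for this example) encodes the four-valued lattice C, its information order and join, and the trajectory test of Definition 2.1.

```python
# A minimal sketch of the node lattice C = {U, L, H, Z} of Section 2.1,
# with information order U below everything and Z above everything, and
# the trajectory check of Definition 2.1 on finite sequence prefixes.

U, L, H, Z = "U", "L", "H", "Z"

def leq(a, b):
    """Information order on C: U is bottom, Z is top."""
    return a == U or b == Z or a == b

def join(a, b):
    """Least upper bound: joining the distinct values L and H gives Z."""
    if leq(a, b):
        return b
    if leq(b, a):
        return a
    return Z

def state_leq(s, t):
    """The order extends pointwise to state tuples (and to sequences)."""
    return all(leq(a, b) for a, b in zip(s, t))

def is_trajectory(seq, Y):
    """Definition 2.1: Y(s_i) must lie below s_{i+1} for every i."""
    return all(state_leq(Y(seq[i]), seq[i + 1]) for i in range(len(seq) - 1))

# Hypothetical 2-node circuit: node 0 is an input, node 1 inverts node 0.
def inverter(s):
    flip = {L: H, H: L, U: U, Z: Z}
    return (U, flip[s[0]])  # the environment, not Y, drives the input

print(is_trajectory([(L, U), (H, H), (L, L)], inverter))  # True
print(is_trajectory([(L, U), (L, L)], inverter))          # False
```

Because U carries no information, a trajectory may always be more specified than the next state function demands, but never less.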

2.2 The temporal logic TL

Before introducing the temporal logic used to express properties of the model, the question of what it means for a property to be true of a lattice state space must be discussed. For technical and modelling reasons, the four-valued logic Q = {⊥, f, t, ⊤} is used to express truth values. True and false are represented by t and f; ⊥ represents neither-true-nor-false; and ⊤ represents an inconsistent truth value, both-true-and-false. There are natural definitions for operations such as conjunction and negation. Q is a bilattice: there are two partial orders defined on it, as shown in Figure 1. The partial order ≤ represents an information ordering (on the truth domain), and the other partial order represents a truth ordering. (Note, the symbol ⊑ is used for comparing states and the symbol ≤ is used to compare truth values.) A particular advantage that Q provides here is that it enables us to distinguish between a property being false of a model or an abstraction of a model, and the abstraction being too weak to prove the property. A full description of Q and its properties is beyond the scope of this paper. General information can be found in the literature [4, 11, 12], and the motivation for its use in STE can be found elsewhere [14, 18].

[Figure 1: The bilattice Q – ⊥ at the bottom and ⊤ at the top of the information order, with t and f incomparable between them.]

A temporal logic, TL, is used to describe properties of the model structures as the specification language. A predicate is a function mapping from the state space, S, to the truth space Q. The base of the logic is the set, G, of simple predicates. Simple predicates allow primitive properties to be expressed – the technical definition can be found in [14], which also shows that restricting the base of the logic

to G does not reduce the expressiveness of the logic. The set of TL formulas is given by the following abstract syntax: TL ::= G | ¬TL | TL ∧ TL | Next TL | TL Until TL. The Next and Until operators are the temporal operators that allow us to express how behaviour must change over time. Each formula describes some property; we wish to know whether a formula is true of a sequence or set of sequences. The semantics of formulas is given by a satisfaction relation, Sat, that takes a formula and a sequence of states and returns a truth value (in Q). The formal definition of the satisfaction relation is given in Definition 2.2.

Let σ = s0 s1 s2 … ∈ S^ω. We use the following conventions: σ_i = s_i; σ_{≥i} = s_i s_{i+1} …; and σ_{[i,j]} = s_i s_{i+1} … s_j. A sequence gives the state of the model at a set of time instants, with s_i being the state of the model at time i. Informally, a sequence satisfies a predicate if the predicate is true of the zeroth element in the sequence (i.e. a predicate by itself asks a question about time 0). A sequence satisfies the conjunction of two formulas if it satisfies both formulas. A sequence satisfies the negation of a formula g if g is false of the sequence. A sequence satisfies Next g if g is true of the sequence from time 1 (i.e. if g is true of the sequence with the zeroth element chopped off). A sequence σ satisfies g Until h if there is a j such that g is true of all suffixes of σ starting from times 0 to j − 1 and h is true of the suffix of σ starting at time j.

Definition 2.2 (Semantics of TL)

1. If g ∈ G then Sat(σ, g) = g(s_0).
2. Sat(σ, g ∧ h) = Sat(σ, g) ∧ Sat(σ, h)
3. Sat(σ, ¬g) = ¬Sat(σ, g)
4. Sat(σ, Next g) = Sat(σ_{≥1}, g)
5. Sat(σ, g Until h) = ⋁_{i=0}^{∞} ((⋀_{j=0}^{i−1} Sat(σ_{≥j}, g)) ∧ Sat(σ_{≥i}, h))

Here ∧ and ¬ are the syntactic operators of the logic TL, whereas ∧ and ¬ (light typeface) are Q operators. Other operators can be defined as convenient shorthands. One shorthand is the bounded global operator During, defined by: During [(f_0, t_0); …; (f_n, t_n)] g = ⋀_{j=0}^{n} (⋀_{k=f_j}^{t_j} Next^k g). Informally, this formula asks whether it is the case that from time f_0 through t_0, from f_1 through t_1, …, and from f_n through t_n, g is true.
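The semantics above can be prototyped directly. The sketch below (an invented tuple encoding of formulas; not the VossProver) evaluates the Until-free core of TL four-valuedly over finite sequences, and unfolds the During shorthand into a conjunction of Next^k formulas.

```python
# Four-valued evaluator for the Until-free core of TL (Definition 2.2).
# Truth values are "bot", "f", "t", "top"; predicates map a state to one
# of these. Formulas are nested tuples, a format made up for this sketch.

NEG = {"bot": "bot", "f": "t", "t": "f", "top": "top"}

def tv(has_t, has_f):
    """Package 'evidence for true'/'evidence for false' as a Q value."""
    return {(False, False): "bot", (True, False): "t",
            (False, True): "f", (True, True): "top"}[(has_t, has_f)]

def conj(a, b):
    return tv(a in ("t", "top") and b in ("t", "top"),
              a in ("f", "top") or b in ("f", "top"))

def sat(seq, phi):
    op = phi[0]
    if op == "pred":     # Sat(sigma, g) = g(s0)
        return phi[1](seq[0])
    if op == "and":
        return conj(sat(seq, phi[1]), sat(seq, phi[2]))
    if op == "not":
        return NEG[sat(seq, phi[1])]
    if op == "next":     # evaluate on the sequence from time 1
        return sat(seq[1:], phi[1])

def during(windows, g):
    """During [(f0,t0),...] g unfolded as a conjunction of Next^k g."""
    phi = None
    for lo, hi in windows:
        for k in range(lo, hi + 1):
            f = g
            for _ in range(k):
                f = ("next", f)
            phi = f if phi is None else ("and", phi, f)
    return phi

is_high = ("pred", lambda s: "t" if s == "H" else "f")
print(sat(["H", "H", "L"], during([(0, 1)], is_high)))  # t
print(sat(["H", "L"], during([(0, 1)], is_high)))       # f
```

With two-valued predicates the evaluator stays in {t, f}; the ⊥ and ⊤ values only arise once partially known lattice states feed the predicates.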

2.3 Symbolic Trajectory Evaluation

Symbolic trajectory evaluation is a model checking algorithm that compares characterising sequences of formulas to determine satisfaction. Recall that trajectories are sequences of states that are compatible with the next state function (Definition 2.1). Suppose that g and h are TL formulas (we can think of g – the antecedent – as the 'input' to the circuit and h – the consequent – as the output of the circuit). The style of verification that STE supports is: do all trajectories that satisfy g also satisfy h? This type of verification condition is called an assertion, is denoted by g ==>> h, and is formally defined by:

Definition 2.3 ⊨ g ==>> h iff for all trajectories σ, σ satisfies g (t ≤ Sat(σ, g)) implies that σ satisfies h (t ≤ Sat(σ, h)).

An important class of formulas – called trajectory formulas – have the important property that they have unique minimum sequences that satisfy them. So if g is a trajectory formula, there is a unique sequence δ^g such that any other sequence σ satisfies g exactly when δ^g ⊑ σ; δ^g is known as the defining sequence of g. Similarly, trajectory formulas have unique minimum trajectories that satisfy them. If g is a trajectory formula, this minimum trajectory is called the defining trajectory, and is denoted τ^g. Using this property leads to an efficient verification algorithm for trajectory formulas.

Theorem 2.1 (Seger and Bryant) ⊨ g ==>> h precisely when δ^h ⊑ τ^g.
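Theorem 2.1 suggests a direct implementation strategy, sketched below as a toy scalar version (the constraint-list formula format and the unit-delay inverter are invented for illustration; real STE works symbolically over BDDs): build the antecedent's defining sequence, push it forward through the next state function to get the defining trajectory, and check that the consequent's defining sequence lies below it.

```python
# Toy version of the STE check: delta(h) must lie below tau(g).
U, L, H, Z = "U", "L", "H", "Z"
LEQ = lambda a, b: a == U or b == Z or a == b
JOIN = lambda a, b: b if LEQ(a, b) else (a if LEQ(b, a) else Z)

def defining_sequence(constraints, nodes, length):
    """Weakest sequence satisfying a constraint list [(time, node, value)]."""
    seq = [{n: U for n in nodes} for _ in range(length)]
    for t, node, val in constraints:
        seq[t][node] = JOIN(seq[t][node], val)
    return seq

def defining_trajectory(seq, Y):
    """Push the next state function's consequences forward through seq."""
    for i in range(len(seq) - 1):
        derived = Y(seq[i])
        seq[i + 1] = {n: JOIN(seq[i + 1][n], derived[n]) for n in seq[i + 1]}
    return seq

def check(ant, cons, Y, nodes, length):
    tau = defining_trajectory(defining_sequence(ant, nodes, length), Y)
    delta = defining_sequence(cons, nodes, length)
    return all(LEQ(delta[t][n], tau[t][n]) for t in range(length) for n in nodes)

# Unit-delay inverter: out at t+1 is the complement of inp at t.
def inv(s):
    flip = {L: H, H: L, U: U, Z: Z}
    return {"inp": U, "out": flip[s["inp"]]}

print(check([(0, "inp", L)], [(1, "out", H)], inv, ["inp", "out"], 2))  # True
```

Everything the antecedent leaves unconstrained stays at U, which is exactly the abstraction the next paragraph describes.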

This theorem forms the heart of STE: given an assertion, the relevant defining sequence and trajectory are computed and compared. The less specified in the antecedent, the higher the level of abstraction and the more 'don't know' values are used in trajectory evaluation. Not only is this computationally effective, it is easy for the human verifier to use. However, STE, like all other automatic model checking algorithms, has computational limitations, due to the cost of model checking. To overcome these limitations, a compositional theory has been developed, the details of which can be found in [14, 17]. The theory that supports trajectory formulas can be generalised to the full logic, TL. The details of this are presented in Appendix A. Trajectory evaluation and its associated compositional theory have been implemented in the VossProver verification tool. Underlying the VossProver is the Voss system of Seger [24]. The three key features of Voss are:

- A symbolic simulator which provides good models of gate-level and switch-level circuits;
- An efficient implementation of ordered binary decision diagrams (BDDs);
- A lazy functional language called FL (a dialect of ML), which acts as the interface to the verification system.

These have been extended with the compositional theory. Full details can be found in [18].

2.4 Compositional theory of STE

As with other model checking algorithms, there are computational limitations on what STE can efficiently model check. To overcome these limitations, a compositional theory was developed for STE [15], and was successfully used on a number of example circuits. The cost of STE is particularly sensitive to the complexity of the property to be proved and relatively insensitive to the size of the circuit. The motivation of the compositional theory is that one could overcome the computational limitations by using STE to prove simple properties and then use the compositional theory to infer the complex results that the human would want proved. STE can be used to verify low-level properties, and the human verifier concentrates on gluing these results together at a higher level, where human insight can be productively used. Although this approach was successful, there is no doubt that the effective use of the compositional theory requires a relatively high level of expertise in the compositional theory and the verification tool, as well as an understanding of the circuit to be proved. Moreover, in choosing which local properties to verify one must have a detailed understanding of the global constraints and the relationship between the global and local constraints (timing in particular). The motivation of the work presented in this paper is to harness the power of the compositional theory while reducing considerably the amount of human intervention required. Essentially we wish to automate – at least partially – the use of the compositional theory. The next section lays down the theoretical framework for doing this, and Section 4 shows how this is put into practice.


3 Theory of Model Synthesis

This section presents the theory for constructing a model from a set of assertions. Only the theory for assertions involving TL formulas that are trajectory formulas is given here. The full theory is given in Appendix A. Recall that trajectory formulas are formulas that have unique defining sequences: if g is a trajectory formula, then there exists a unique sequence δ^g such that any sequence that satisfies g is at least as big as δ^g. Note that such a defining sequence depends only on the state space and the formula, not on the next state function.

3.1 Generalising the model

Let the lattice state space ⟨S, ⊑⟩ represent the state space of the circuit. Rather than modelling the behaviour of the circuit with a next state function Y : S → S, it is convenient to generalise the representation of the circuit behaviour by representing the behaviour by a behaviour function Y : S^ω → S^ω. The idea is that, given a sequence of states s0 s1 s2 … sk that the circuit has gone through, the next state of the circuit, s_{k+1}, is at least partially determined not only by state s_k but by all the states before it. In general, it may also be the case that other states besides s_{k+1} are partially determined by s0, …, sk. Let σ = s0 s1 s2 …. There are three requirements on behaviour functions:

- Monotonicity: if σ1 ⊑ σ2, then Y(σ1) ⊑ Y(σ2);
- Fix-point property: Y(Y(σ)) = Y(σ);
- Causality: if σ′ = Y(σ) and σ″ = Y(s0 … sk U U …), then σ′ and σ″ agree on the states up to time k. This ensures that events in the 'future' do not affect events in the past.

The fix-point property is not strictly necessary, but it captures a type of deductive closure. Note that a next state function Y is just a special case of a behaviour function: in this case, if σ′ = Y(σ), then σ and σ′ are identical except for σ′_1, which depends on σ_0. This is illustrated in the simple example below. This definition of a behaviour function leads to the generalisation of the definition of a trajectory. A Y-trajectory is a sequence that is compatible with the behaviour function.

Definition 3.1 A sequence σ is a Y-trajectory if, for all i and j, Y(σ_{≥i})_j ⊑ σ_{i+j}.

Informally, this definition says that the values in σ are consistent with the application of the behaviour function to any of its suffixes. Given this revised definition of a trajectory, the definition of an assertion remains the same.

Example. Consider the simple circuit shown in Figure 2. The state of this circuit at any instant can be represented by a 6-tuple, the voltages at A, B, C, D, E, G. The obvious¹ next state function for this is:

Y(v_1, v_2, v_3, v_4, v_5, v_6) = (U, U, U, ¬v_2, v_1 ∨ v_4, v_2 ∧ v_3 ∧ v_5).

Note that the environment, not Y, is in control of the inputs.

Simple example 1: This definition can be turned into an equivalent behaviour function: Y′(σ) = σ′, where σ′_1 = σ_1 ⊔ Y(σ_0), and σ′_i = σ_i for i ≠ 1. Sequences that are trajectories with respect to the next state function are also trajectories with respect to the behaviour function Y′. All next state functions have behaviour functions, but not all behaviour functions have equivalent next state functions.

¹As we shall see later, there are other – some sensible, some not – possible definitions.

[Figure 2: Simple circuit example]

Simple example 2: We can also define the following behaviour function:

Y(s_0 s_1 s_2 s_3 s_4 s_5 …) = s_0 s′_1 s_2 s′_3 s_4 s_5 …,

where if s_0 = (v_1, v_2, v_3, v_4, v_5, v_6), s_1 = (w_1, w_2, w_3, w_4, w_5, w_6), s_2 = (x_1, x_2, x_3, x_4, x_5, x_6), and s_3 = (y_1, y_2, y_3, y_4, y_5, y_6), then:

s′_1 = (w_1, w_2, w_3, ¬v_2 ⊔ w_4, w_5, w_6)
s′_3 = (y_1, y_2, y_3, y_4, y_5, (x_2 ∧ x_3 ∧ (¬v_2 ∨ w_1)) ⊔ y_6)

This behaviour function doesn't completely describe the circuit – it is consistent with the next state function described earlier, but it doesn't contain all the information the next state function has. Behaviour functions allow us to 'remember' things that happened in the past.

Applying the behaviour function. Given a behaviour function, Y, we can describe the circuit behaviour by generating a sequence of states which informally can be thought of as a run of the circuit. In the sequence σ, σ_j describes the state of the circuit at time j. To apply the behaviour function, we might start with some initial sequence of states that only contains information about the state in the first few time instants (i.e. all other states in the sequence would be bottom values). We now apply Y to σ to get more information, and we update σ appropriately. Since Y has the fix-point property, we shall not get any more information by re-applying Y to σ itself. But by applying Y to the suffix of σ starting at time 1 we can get more information. We can iterate this process a number of times, each step getting more information by applying Y further down σ. It is important to realise that in one step we may use more than one state in the sequence to infer new information, and may get information about more than one instant in time. The algorithm below outlines the process. (The notation U^i σ′ represents the sequence of i bottom values U prefixed to the sequence σ′.)

    i := 0;
    repeat
        σ := (U^i Y(σ_{≥i})) ⊔ σ;
        i := i + 1;

This is a little simplistic: in principle the application at time i might result in a new sequence to which we could now apply Y at time i − 1 to get more information, and so we may need to reset i to 0. This may

seem counter-intuitive at first. However, the properties of monotonicity and causality ensure we do not get strange behaviour. We tend to think of the application of a next state function as stepping forward in time, simulating the behaviour of the circuit. Rather, think of the process of applying the behaviour function as refining the information in σ. The end result of applying the behaviour function repeatedly is the sequence of states which the circuit goes through – in itself, the process of applying the behaviour function does not mimic the circuit behaviour. In practice we do not have to move 'backwards' at any stage, as we are able to use the structure of the behaviour function to sweep forwards consistently.
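The refinement loop can be sketched concretely. In the Python fragment below, single lattice values stand in for full node-tuples, and the one-place delay behaviour function is invented for the example; each pass joins Y's output on a suffix back into σ.

```python
# Sketch of the refinement loop: sigma := (U^i Y(sigma_{>=i})) join sigma.
U, L, H, Z = "U", "L", "H", "Z"
LEQ = lambda a, b: a == U or b == Z or a == b
JOIN = lambda a, b: b if LEQ(a, b) else (a if LEQ(b, a) else Z)

def refine(sigma, Y):
    """Apply Y to each suffix of sigma in turn, joining the result back in."""
    for i in range(len(sigma)):
        derived = Y(sigma[i:])  # information Y derives about the suffix
        sigma = sigma[:i] + [JOIN(a, b) for a, b in zip(sigma[i:], derived)]
    return sigma

# Hypothetical behaviour function: remember the head of the suffix into
# the next position (a one-place buffer, cf. Simple example 1).
def delay(suffix):
    out = list(suffix)
    if len(out) > 1:
        out[1] = JOIN(out[1], suffix[0])
    return out

print(refine([H, U, U, U], delay))  # ['H', 'H', 'H', 'H']
```

Starting from information only about time 0, each sweep of the loop pushes that information one step further down the sequence, which is exactly the 'refining, not simulating' reading of the text above.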

3.2 Abstractions

The focus in the following sections is on building abstractions. This section defines what is meant by abstraction. First, one behaviour function is an abstraction of another if it is weaker than it.

Definition 3.2 If Y_1, Y_2 : S^ω → S^ω are behaviour functions, then Y_1 is an abstraction of Y_2 if for all Y_2-trajectories σ, Y_1(σ) ⊑ Y_2(σ).

A model M_1 is an abstraction of model M_2 if every assertion true of M_1 is also true of M_2.

Definition 3.3 Let M_1 = (⟨S, ⊑⟩, Y_1) and M_2 = (⟨S, ⊑⟩, Y_2) be two models. M_1 is an abstraction of M_2 if ⊨_{M_1} ⟨g ==>> h⟩ implies that ⊨_{M_2} ⟨g ==>> h⟩.

These two concepts of abstraction are tied together.

Theorem 3.1 Let M_1 = (⟨S, ⊑⟩, Y_1) and M_2 = (⟨S, ⊑⟩, Y_2) be two models. If Y_1 is an abstraction of Y_2, then M_1 is an abstraction of M_2.

Proof.
(1) Suppose ⊨_{M_1} ⟨g ==>> h⟩.
(2) Let σ be a Y_2-trajectory satisfying g.
(3) For all i, j, Y_2(σ_{≥i})_j ⊑ σ_{i+j}    (2), definition of Y_2-trajectory
(4) For all i, j, Y_1(σ_{≥i})_j ⊑ σ_{i+j}    (3), Y_1 is an abstraction of Y_2
(5) σ is a Y_1-trajectory    (4)
(6) σ satisfies h    (1), (2), (5)
(7) ⊨_{M_2} ⟨g ==>> h⟩    (2), (6)
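Definition 3.2 can be explored on finite sequences. In the sketch below, both behaviour functions and the hand-picked set of Y_2-trajectories are toy stand-ins: Y_1 derives nothing at all, so it is trivially weaker than Y_2 on Y_2's trajectories.

```python
# Finite-sequence sketch of Definition 3.2: Y1 abstracts Y2 if, on every
# Y2-trajectory, Y1 derives no more information than Y2 does.
U, L, H, Z = "U", "L", "H", "Z"
LEQ = lambda a, b: a == U or b == Z or a == b
JOIN = lambda a, b: b if LEQ(a, b) else (a if LEQ(b, a) else Z)
seq_leq = lambda s, t: all(LEQ(a, b) for a, b in zip(s, t))

def is_abstraction(Y1, Y2, y2_trajectories):
    return all(seq_leq(Y1(s), Y2(s)) for s in y2_trajectories)

def Y2(s):
    out = list(s)
    if len(out) > 1:
        out[1] = JOIN(out[1], s[0])  # remember the head one step forward
    return out

def Y1(s):
    return list(s)  # the trivial abstraction: derives nothing new

trajs = [[H, H, U], [L, L, U], [U, U, U]]  # sequences already closed under Y2
print(is_abstraction(Y1, Y2, trajs))  # True
```

Note that the comparison is only over Y_2-trajectories; on arbitrary sequences the two functions need not be comparable at all.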

3.3 The synthesis of a model

In this discussion, we shall assume we are given a circuit which is modelled with a fixed lattice state space ⟨S, ⊑⟩. What we shall focus on is the behaviour function used to model the behaviour of the circuit over time – a number of possible behaviour functions exist. The most important behaviour function is the concrete next state function, Y_C, extracted directly from the circuit description. Y_C is a very accurate model of the circuit behaviour, describing functionality and timing and even some subtle electrical behaviour precisely. A key issue is that the behaviour is modelled at the bit level, and the appropriate data structures to use for this are BDDs, with all the strengths and weaknesses of BDDs. The goal is to build an abstraction of the model.

The verifier first considers the model M_C = (⟨S, ⊑⟩, Y_C), and then selects a set of assertions to be checked based upon an understanding of the circuit. Let A = {⊨_{M_C} ⟨g_i ==>> h_i⟩ : i = 1, …, n} be that set of assertions. These are verified directly using symbolic trajectory evaluation. The next step is to construct a new behaviour function, Y, that is more abstract than Y_C; the model M = (⟨S, ⊑⟩, Y) can then be used for verification. By the property of abstraction, if ⊨_M ⟨g ==>> h⟩, then ⊨_{M_C} ⟨g ==>> h⟩. Moreover, because of the lattice state space structure and the use of Q as a truth domain, if it is not the case that ⊨_M ⟨g ==>> h⟩ it is possible to distinguish two cases: (1) the function Y is too abstract to determine whether the property holds of M_C; (2) the property does not hold of M_C. Here we assume that each g_i and h_i is a trajectory formula, so for each g_i there is a unique weakest sequence δ^{g_i} that satisfies it, and similarly for each h_i. (The general theory is given in Appendix A.)

The abstraction reduces computational costs by suppressing unnecessary detailed behaviour of the circuit and identifying structure in the circuit. Rather than the model describing how individual nodes affect other individual nodes, the assertions typically describe how groups of nodes in the circuit affect other groups. The new abstracted model can be represented more compactly – symbolic data structures other than BDDs can be used to reason about the model's behaviour.

How the abstraction is computed. The goal is to construct a model, M, in which each assertion in A is true. Moreover, we wish this model to be the weakest model in which this holds, in the following sense: if ⊨_M ⟨g ==>> h⟩, then for any model M′ for which ⊨_{M′} ⟨g_i ==>> h_i⟩ (i = 1, …, n), it is the case that ⊨_{M′} ⟨g ==>> h⟩.

Any Y-trajectory that satisfies g_i should satisfy h_i. This can be ensured if, whenever σ is a sequence satisfying g_i (i.e. δ^{g_i} ⊑ σ), it is also the case that Y(σ) satisfies h_i (i.e. δ^{h_i} ⊑ Y(σ)); and this can be ensured by defining Y(σ) = y(σ), where

    y(σ) = ⊔{δ^{h_i} : i = 1, …, n, δ^{g_i} ⊑ σ} ⊔ σ.    (1)

The function y is monotonic because of the monotonicity of the join operation. The commutativity and associativity of join in the lattice makes this definition oblivious of the ordering of the g_i and h_i. What the function y doesn't take into account is the interplay between the various assertions, which means that y does not have the fix-point property in general. For example, suppose that δ^{g_1} ⊑ σ (σ satisfies g_1). By our definition of y, y(σ) satisfies both g_1 and h_1. But now it could well be that while σ does not satisfy g_2, y(σ) does. We need to take this into account, otherwise our model will not fully take into account all the information given by the assertions. What we have to do is repeatedly apply y to σ until we reach stability – this will happen since there are a finite number of assertions and join is idempotent (it also follows from general properties of monotonicity in lattices). Thus, the definition of Y is revised to be the fixed point of this iteration:

    Y(σ) = y^k(σ), where k is such that y^{k+1}(σ) = y^k(σ).    (2)

By the construction of Y, each assertion in A will be true of the abstract model M; moreover, by STE's compositional theory, any assertion inferable from A will also hold. This means that the constructed model has deductive power.
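Equations 1 and 2 are directly executable on finite sequences. The sketch below uses made-up assertions over a single node (in the paper, defining sequences come from trajectory formulas): y joins into σ the consequent of every assertion whose antecedent σ already satisfies, and Y iterates y to a fixed point.

```python
# Sketch of Equations 1 and 2: assertions are (delta_g, delta_h) pairs of
# defining sequences; sequences are lists of lattice values.
U, L, H, Z = "U", "L", "H", "Z"
LEQ = lambda a, b: a == U or b == Z or a == b
JOIN = lambda a, b: b if LEQ(a, b) else (a if LEQ(b, a) else Z)
seq_leq = lambda s, t: all(LEQ(a, b) for a, b in zip(s, t))
seq_join = lambda s, t: [JOIN(a, b) for a, b in zip(s, t)]

def y(sigma, assertions):
    """Equation 1: join of {delta_h : delta_g below sigma} with sigma."""
    out = list(sigma)
    for delta_g, delta_h in assertions:
        if seq_leq(delta_g, sigma):
            out = seq_join(out, delta_h)
    return out

def Y(sigma, assertions):
    """Equation 2: apply y repeatedly until sigma stabilises."""
    while True:
        nxt = y(sigma, assertions)
        if nxt == sigma:
            return sigma
        sigma = nxt

# Two chained assertions: the first triggers the antecedent of the second.
a1 = ([H, U, U], [H, H, U])   # if the node is high at time 0, also at time 1
a2 = ([U, H, U], [U, H, H])   # if high at time 1, also at time 2
print(Y([H, U, U], [a1, a2]))  # ['H', 'H', 'H']
```

The example shows why one application of y is not enough: a2 only fires after a1 has added information, which is exactly the interplay the fixed-point iteration handles.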

Soundness of the model

M

M M

M

to reason about must be an abstraction of To use C, C . By Theorem 3.1, it is enough to show that is an abstraction of C . The proofs show that is an abstraction of any behaviour function that satisfies all the assertions =M gi == hi .

Y

Y

j h

Y

 i

M be fixed, y and Y be as defined above in Equations 1 and 2: Lemma 3.2 Suppose M = (hS ; v i; Y ) is a model such that j=M1 hgi ==hii, i = 1; : : : ; n. If  is a Y -trajectory, then y ( ) v Y ( ). In the discussion below, let 1

1

Proof. (1) (2) (3) (4)

1

1

y() = tfhi : i = 1; : : : ; n; gi v g t   v Y1 () gi v Y1 () ) hi v Y1 () tfhi : gi v Y1 ()g t  v Y1() 9

Definition of y Monotonicity of

j=M1 hgi ==hii

Y

1

(2), (3), definition of join

f i :  i v g  f i :  i v Y ()g (2) h g h g i i i i tf :  v g v t f :  v Y ()g (5) tfhi : gi v g t  v t fhi : gi v Y ()g t  (6) y() v Y () (1), (4), (7) Lemma 3.3 Suppose M = (hS ; v i; Y ) is a model such that j=M1 hgi ==hii, i = 1; : : : ; n. If  is a Y -trajectory, then y n ( )) v Y ( ). (5) (6) (7) (8)

Proof.
(1) Let σ be a Y₁-trajectory.
(2) y⁰(σ) ⊑ Y₁(σ)                                   [Induction base: y⁰(σ) = σ, and σ a Y₁-trajectory]
(3) Suppose yⁿ(σ) ⊑ Y₁(σ)                           [Induction hypothesis]
(4) yⁿ⁺¹(σ) = y(yⁿ(σ)) ⊑ y(Y₁(σ))                   [(3), monotonicity of y]
(5) Y₁(Y₁(σ)) = Y₁(σ)                               [Fixed-point property]
(6) y(Y₁(σ)) ⊑ Y₁(Y₁(σ))                            [Lemma 3.2, Y₁(σ) a Y₁-trajectory]
(7) yⁿ⁺¹(σ) ⊑ Y₁(σ)                                 [(4)-(6)]
(8) For all n ≥ 0, yⁿ(σ) ⊑ Y₁(σ)                    [(2), (3), (7): induction]

Theorem 3.4. If M₁ = (⟨S, ⊑⟩, Y₁) is a model such that |=_{M₁} ⟨g_i ==>> h_i⟩, i = 1, …, n, then M is an abstraction of M₁.

Proof.
(1) Let σ be a Y₁-trajectory.
(2) For all n ≥ 0, yⁿ(σ) ⊑ Y₁(σ)                    [Lemma 3.3]
(3) Y(σ) ⊑ Y₁(σ)                                    [(2), Equation 2]
(4) Y is an abstraction of Y₁                       [(1)-(3)]
(5) M is an abstraction of M₁                       [(4), Theorem 3.1]

Theorem 3.4 not only shows that any results we infer using the constructed model M are true of the model M_C (so our results are sound), but also that M is the weakest model for which all the given assertions hold. The next section turns to how this theory can be put into practice.

4 Algorithm for model synthesis and trajectory evaluation

Section 3 discussed the theory of model synthesis. This section presents work on an algorithm for constructing a model from a set of assertions, and then model checking the constructed model. It is not a complete algorithm, in the sense that the synthesised model is not the weakest model consistent with the set of given assertions (however, safety is assured). Work needs to be done on making the algorithm more powerful and improving its efficiency. A complete description of the algorithm is outside the scope of this paper, but the outline described below should give a good flavour of how it works.

At this stage an important restriction is made: the Until operator is not supported. For many examples in hardware this is not a restriction at all, since where timing is important the Until operator does not come into play. From a practical point of view, we treat the bounded global operator as the basic temporal operator of the logic. Recall that:

During [(f₀, t₀), …, (fₙ, tₙ)] g  =  ⋀_{j=0}^{n} ( ⋀_{k=f_j}^{t_j} Nextᵏ g ).

The model for the concrete circuit is extracted by the Voss system from the circuit description. If there are n state-holding components (nodes) in the circuit, the state space for the circuit is Cⁿ, where each node is represented by one component of the state tuple.
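The expansion of the bounded During operator into concrete time points can be sketched as follows; this is a hedged illustration using an invented function name and a plain list-of-pairs representation of the intervals.

```python
def during_times(intervals):
    """Expand During[(f0,t0),...,(fn,tn)] into the sorted set of time
    points k at which the subformula must hold (one Next^k obligation
    per point, conjoined over all intervals)."""
    return sorted({k for f, t in intervals for k in range(f, t + 1)})

print(during_times([(0, 2), (5, 6)]))  # → [0, 1, 2, 5, 6]
```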

Outline of section: Section 4.1 discusses the representation of the abstract model. Section 4.2 outlines the naïve trajectory evaluation algorithm for the abstract model, and Section 4.3 raises pragmatic points regarding the implementation of the algorithm.


4.1 Model synthesis

4.1.1 Semi-canonical form

A key step in the algorithm that implements the theory is translating TL formulas to semi-canonical form. Each formula g is converted into a semantically equivalent formula of the form

⋀_{k=1}^{n} During [(s₀, t₀), …, (s_{r_k}, t_{r_k})] g_k,

where t_i < s_{i+1} for all i. It is only semi-canonical, since the translation routine cannot guarantee that two semantically equivalent formulas are translated into identical syntactic forms, but the translation routine does a good job. The semi-canonical form of a formula is used as the implicit representation of the formula's defining sequence.

4.1.2 The new model

Given a set of assertions {|= ⟨g_i ==>> h_i⟩ : i = 1, …, n}, the data structure that represents the model is the set of tuples

A = {m_i : i = 1, …, n}

where

m_i = ⟨g_i ==>> h_i, t_i, hc_i⟩,

hc_i is the semi-canonical form of h_i, and t_i is the first time at which the assertion |= ⟨g_i ==>> h_i⟩ could possibly be useful. Initially t_i is set to 0; it is then updated during trajectory evaluation. How this model representation is used is described in the next section. (In practice, other information is also stored to make using the model more efficient.)
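The record m_i can be pictured as a small structure per proved assertion; the following hedged Python sketch uses invented field names and placeholder types, not the VossProver's FL representation.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class AssertionRecord:
    """One m_i: the assertion, the earliest useful time-shift t_i
    (initially 0, updated during trajectory evaluation), and the
    semi-canonical form hc_i of the consequent."""
    antecedent: Any                          # g_i
    consequent: Any                          # h_i
    t: int = 0                               # t_i
    hc: list = field(default_factory=list)   # hc_i (interval form)

model = [AssertionRecord("A=a & B=b", "Next(C=a+b)")]
model[0].t += 1          # as updated during trajectory evaluation
print(model[0].t)        # → 1
```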

4.2 Trajectory evaluation of the synthesised model

We are given a model M constructed as above, and formulas g and h. We wish to use trajectory evaluation to determine whether |=_M ⟨g ==>> h⟩. Let g_n and g_c be the nodes mentioned in g and the semi-canonical form (i.e. the representation of g's defining sequence). From this the defining trajectory of g is computed. Initially, set σ = σ^g. A basic outline of the algorithm is shown in Figure 3. The function TrajectoryEvaluate is the main function. Initially i is 0. We call the ApplyY function on the suffix of σ starting from index i; the function returns a sequence that describes the effect of applying Y at this point. If there is no effect (i.e. change = U) then we increment i to see if we can apply Y at the next point in the sequence. If there is a change, we update σ by joining it with change and set i back to 0 (since we may now be able to apply Y in a different way). The loop is guaranteed to terminate because (1) for reasons discussed earlier we need only examine finite prefixes of sequences, and (2) since the state space is finite, an individual element in a sequence can only be changed a finite number of times. The ApplyY function describes a single application of the behaviour function. Essentially, for each assertion in the model it checks whether the defining sequence of the antecedent of the assertion lies below the current suffix of the trajectory being evaluated. It continues this process until a fixed point is reached (i.e. this is an implementation of Equation 2).
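The loop structure just described can be sketched in executable form. This is a hedged Python illustration over finite sequences of lattice elements (frozensets, with bottom U as the empty set and join as union); the toy assertion and all names are invented, and this is not the VossProver's FL implementation.

```python
BOT = frozenset()   # U, the bottom element

def leq(xs, ys):
    """Pointwise lattice order on sequences (missing tail = bottom)."""
    return all(x <= (ys[i] if i < len(ys) else BOT)
               for i, x in enumerate(xs))

def join(xs, ys):
    """Pointwise join of two sequences, padding with bottom."""
    n = max(len(xs), len(ys))
    pad = lambda s: [s[i] if i < len(s) else BOT for i in range(n)]
    return [a | b for a, b in zip(pad(xs), pad(ys))]

def trajectory_evaluate(sigma, assertions):
    """Join in each consequent whose antecedent holds at some offset i,
    restarting from offset 0 after every change (cf. Figure 3)."""
    i = 0
    while i < len(sigma):
        changed = False
        for g, h in assertions:
            if leq(g, sigma[i:]):               # antecedent holds at offset i
                shifted = [BOT] * i + list(h)   # time-shift consequent by i
                new = join(sigma, shifted)
                if new != sigma:
                    sigma, changed = new, True
        if changed:
            i = 0
        else:
            i += 1
    return sigma

# One toy assertion: "if node set {a} holds now, {b} holds one step later".
A = [([frozenset({"a"})], [BOT, frozenset({"b"})])]
print(trajectory_evaluate([frozenset({"a"}), BOT, BOT], A))
```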

4.3 Pragmatic Considerations

The algorithm above is obviously naïve and would not be efficient to implement directly. The practical implementation of the algorithm draws on the compositional theory mentioned earlier, and in particular on three rules: the rules of transitivity, specialisation and time-shift.


function ApplyY (σ)
    change := U;
    repeat
        more := false;
        for j := 1 to n do
            if g_j ⊑ σ ⊔ change and h_j ⋢ change then
                change := change ⊔ h_j;
                more := true;
            end;
        end;
    until not more;
    return change;
end;

function TrajectoryEvaluate (σ)
    i := 0;
    repeat
        result := ApplyY (σ_{≥i});
        if result = U then
            i := i + 1;
        else
            change := Uⁱ result;     (result time-shifted by i)
            σ := σ ⊔ change;
            i := 0;
        end
    until end detected;
end;

Figure 3: Pseudo-code for STE

4.3.1 Specialisation Rule

A specialisation is a function mapping variables that appear in TL formulas to expressions over these variables. Given a specialisation we can extend it to a function mapping TL formulas to TL formulas, using the structure of formulas. The rule of specialisation says that if ξ is a specialisation and ⟨g ==>> h⟩ holds, then so does ⟨ξ(g) ==>> ξ(h)⟩. As a simple example, we might prove the following of an adder:

⟨[A] = x ∧ [B] = y ==>> Next [C] = x + y⟩

Using specialisation we can use this result to infer:

Specialised Assertion                                    Specialisation Used
⟨[A] = 3 ∧ [B] = 4 ==>> Next [C] = 7⟩                    {x ↦ 3, y ↦ 4}
⟨[A] = u·v ∧ [B] = 2·y ==>> Next [C] = u·v + 2·y⟩        {x ↦ u·v, y ↦ 2·y}
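Applying a specialisation is structural substitution. The following hedged Python sketch pushes a variable-to-expression mapping through a toy term representation; the tuple encoding of terms is invented for illustration and is not the paper's formula representation.

```python
def specialise(term, xi):
    """term is ('var', name), ('const', k), or (op, left, right).
    Replace each variable by its image under the specialisation xi,
    leaving unmapped variables and constants unchanged."""
    tag = term[0]
    if tag == 'var':
        return xi.get(term[1], term)
    if tag == 'const':
        return term
    return (tag, specialise(term[1], xi), specialise(term[2], xi))

g = ('+', ('var', 'x'), ('var', 'y'))
print(specialise(g, {'x': ('const', 3), 'y': ('const', 4)}))
# → ('+', ('const', 3), ('const', 4))
```

Applying the same substitution to both sides of an assertion is exactly the specialisation rule above.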

4.3.2 Transitivity Rule

The transitivity rule says that if ⟨g₁ ==>> h₁⟩ and ⟨g₂ ==>> h₂⟩, then ⟨g₁ ==>> h₂⟩ holds if g₂ ⊑ g₁ ⊔ h₁. (This corresponds to the rule of transitivity of propositional logic.)

4.3.3 Time-shift Rule

This rule says that if ⟨g ==>> h⟩ holds, then for all n ≥ 0, so does ⟨Nextⁿ g ==>> Nextⁿ h⟩.

function STE (g)(h)
    (We wish to discover whether ⟨g ==>> h⟩ holds.)
    σ := σ^g;
    repeat
        foreach m_i = ⟨g_i ==>> h_i, t_i, hc_i⟩ in A do
            if ∃ξ, ∃t ≥ t_i such that Nextᵗ ξ(g_i) ⊑ σ then
                σ := σ ⊔ Nextᵗ ξ(hc_i);
                t_i := t + 1;
            end
        end
    until stability;
    return σ^h ⊑ σ;
end

Figure 4: A more practical algorithm for STE

4.3.4 Using these rules

The algorithm shown in Figure 4 outlines a more practical approach. It relies on the fact that the semi-canonical form of a formula described above is a way of representing the formula's defining sequence, and conversely that a defining sequence of a formula can be converted into semi-canonical form. For each assertion, we use the t_i component of the record of the assertion in the model to indicate the smallest possible time-shift which we could use for that assertion (initially it is 0). Suppose we wish to discover whether ⟨g ==>> h⟩ holds. The algorithm first computes the defining sequence of g (which, recall, can be represented in semi-canonical form); σ is initially set to this defining sequence. We then check whether we can combine σ and any of the assertions in A by using the rules of transitivity, time-shift and specialisation. This involves checking, for each assertion m in A, whether there is a time-shift t and a specialisation ξ such that, by specialising and time-shifting m, we can combine σ and the shifted, specialised m using transitivity. Finding these time-shifts and specialisations is the most difficult and computationally expensive part of this algorithm. This process continues until no more assertions can be applied. Once we have found a specialisation and shift, we can compute the effect of applying the behaviour function by updating σ. We also record (by updating t_i) the time-shift used, so that in a subsequent iteration of the loop we do not find the same specialisation and time-shift again. The soundness of this algorithm is guaranteed by the compositional theory of trajectory evaluation presented elsewhere. However, it is not as powerful as the theory presented in Section 3. The following factors must be borne in mind.

• Comparing formulas/sequences is hard. A key reason for representing formulas in semi-canonical form is to facilitate the comparison of formulas and sequences. The routine that does this comparison is not complete, although previous work has shown that it is robust for a range of problems.

• The algorithm calls a routine that determines whether there exist appropriate specialisations and time-shifts. The routine that finds such a specialisation is a heuristic: it is not guaranteed to find specialisations, though any specialisation it does find will be correct.

This algorithm too can be improved significantly. For example, when we find a specialisation and time-shift, it is often the case that it applies at many time-shifts. By determining what these are and applying the algorithm to all of them, we can perform many steps in one. There are other simple improvements that can be made. This algorithm has been implemented in the VossProver. These routines are all written in FL. A human verifier can prove properties using the 'primitive' STE rule in the VossProver or manually apply


the compositional theory of STE. In addition, the verifier can construct abstract models from a set of assertions and call routines which do STE on the abstract model. Debugging facilities are also provided.

[Figure 5: Base Module for Multiplier (gate-level schematic with inputs a, b, c and CIN, and outputs S and COUT); diagram not reproduced.]

5 Experimental evidence

The method presented in Section 4 has been implemented in the VossProver tool, and applied to the verification of Benchmarks 17 and 21 of the IFIP WG10.4 hardware verification benchmark suite. Section 5.1 describes the circuit of Benchmark 17, an integer multiplier. Section 5.2 describes the verification of this circuit, and Section 5.3 presents and analyses the experimental results. Section 5.4 describes Benchmark 21, a systolic filter, with Sections 5.5 and 5.6 discussing its proof and the analysis of the proof. The final sub-section, Section 5.7, shows how the methodology presented in this paper can be used as part of the development process for debugging: if a verification fails, information can be presented to the human verifier which helps find the problems.

5.1 Description of Benchmark 17

Benchmark 17 of the IFIP WG10.4 suite is an integer multiplier that takes two n-bit numbers and returns the 2n-bit number that is their product. Let A = a_{n−1} … a₁ a₀ and B = b_{n−1} … b₁ b₀ be two n-bit numbers. Then

A·B = Σ_{i=0}^{n−1} 2ⁱ ( Σ_{j=0}^{n−1} 2ʲ a_i b_j ).

Implementing this is straightforward: the basic operation is multiplying one bit of A with one bit of B and adding this to the partial sum. The component that accomplishes this basic operation takes four inputs: a, one bit of the multiplicand; b, one bit of the multiplier; c, one bit of the partial sum previously computed; and CIN, a one-bit carry from the partial sum previously computed. It computes a·b + c + CIN, producing two outputs: S, a one-bit partial sum, and COUT, a one-bit carry. The equations for the outputs are: S = (a ∧ b) ⊕ (c ⊕ CIN) and COUT = (a ∧ b ∧ c) ∨ (a ∧ b ∧ CIN) ∨ (c ∧ CIN). Figure 5 shows the implementation of the equations (as given in the IFIP documentation).

A vector of these components multiplies one bit of B with the whole of A and adds in any partial answer already computed. An n-bit multiplier contains n of these stages. We can consider the vector of S outputs as one n-bit number and the vector of COUT outputs as another n-bit number. If we consider stage k by itself, if the vector of a inputs is x̃, if the b inputs are all the bit y, and if the vector of c inputs is z̃, then we shall have that S + 2^{k+1} COUT = x̃·y + z̃. (This is something that must be proved in the verification.) The S and COUT outputs are not added together after each stage, to reduce the number of gates and the propagation delays of the circuit. These components are arranged in a grid (Figure 6 shows how a 4-bit multiplier is arranged). The multiplier contains n stages, each of which multiplies one bit of B with A and adds it to the partial result computed so far. After k stages, n + k bits of the partial answer have been computed. Having passed through n stages, the full multiplication has been computed. However, as the final stage still outputs two numbers, the carries must now all be added in. Therefore the final step in the multiplier is a row of n − 1 full adders (labelled FA in the figure) that adds in the carries.
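The output equations can be checked exhaustively against the arithmetic they implement. The following hedged Python sketch is a reference model of the base module, not the EXE-level design; the function name is invented.

```python
from itertools import product

def base_module(a, b, c, cin):
    """One base module: computes a*b + c + CIN as the pair (S, COUT)."""
    ab = a & b
    s = ab ^ c ^ cin                             # S = (a AND b) XOR (c XOR CIN)
    cout = (ab & c) | (ab & cin) | (c & cin)     # COUT: the three AND terms ORed
    return s, cout

# Exhaustively check the gate equations against the arithmetic spec.
for a, b, c, cin in product((0, 1), repeat=4):
    s, cout = base_module(a, b, c, cin)
    assert a * b + c + cin == s + 2 * cout
print("base module matches a*b + c + CIN on all 16 input combinations")
```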

[Figure 6: Graphical view of Multiplier (a 4-bit example: four BM stages fed by A and B[0]..B[3], producing S0..S4 and COUT0..COUT3, with a final FA row to add in the carries); diagram not reproduced.]

The implementation of the circuit was done in Voss's EXE format as a detailed gate-level description of the circuit. A unit-delay model was used, although this is essential to neither the implementation nor the verification. Full details of the implementation can be found in [22] or the benchmark suite documentation².

5.2 Proof of correctness of Benchmark 17

The general method of proof proposed in the paper is:

1. The verifier picks a set of properties to be proved of the circuit;
2. These are verified automatically using trajectory evaluation;
3. An abstract model is then constructed automatically;
4. Properties are then proved of the abstract model (and hence, by Theorem 3.4, of the underlying model) automatically using trajectory evaluation.

The last two steps may be repeated as often as is necessary. This method was applied directly to this circuit. The rest of this sub-section gives an outline of the proof; the details may be found in Appendix B.

Basic properties to be proved. The following property was proved of each stage i = 0, …, n − 1, where we assume that the bit-width of A and B is e and that the base module in Figure 5 takes 3ns to compute its output once the inputs are stable.

• If from time 0 to time 3: (1) A has the value a; (2) the i-th bit of B has the value b[i]; (3) S_{i−1} has the value c; and (4) COUT_{i−1} has the value d;
• Then at time 3 the sum of the S_i and COUT_i outputs is a·b[i] + c + 2ⁱ·d.

In TL terms, this property is written (as produced by the VossProver) as:

|= ⟨During [(0, 3)] (A = a ∧ B[i] = b ∧ S_{i−1} = c[(e + i − 1)..0] ∧ C_{i−1} = d[(e + i − 1)..0])
    ==>> During [(3, 3)] (S_i + 2^{i+1} C_i = c[(e + i − 1)..0] + 2ⁱ a·b[i])⟩

²http://goethe.ira.uka.de/benchmarks/

Bit width    STE    Comp    Abs MC     New
       8       8      18        15      28
      16      29      45        48      81
      32     146     179       191     334
      64    1025    1065      1263    2567

Table 1: Verification costs for Benchmark 17 multiplier (in seconds)

(Note that we need to be careful about exactly which bits of c and d are being referred to.) The final basic property proved of the circuit is that the final set of adders adds its inputs together (taking into account the shifts): if from time 0 to time 2e, node S_{e−1} is c and COUT_{e−1} is d, then at time 2e, S_e has the value c + 2ᵉ·d. More formally,

|= ⟨During [(0, 2e)] (S_{e−1} = c ∧ C_{e−1} = d) ==>> During [(2e, 2e)] (S_e = c + 2ᵉ·d)⟩.

Construction of model. Once the basic properties have been proved, an abstract model is automatically synthesised from these properties using a library call in the VossProver. Note that in order to prove the basic assertions, the VossProver translates the assertions so that the data is represented using BDDs. However, once the assertions have been proved correct, they are represented symbolically using more abstract data structures than BDDs. Thus, the higher-level STE algorithm is not limited by the power of BDDs.

Proof of final result. The final result we wish to prove is:

If from time 0 to m, A has the value a and B has the value b, then from time 6e to m, S_e has the value a·b.

Here m is determined partly by the specification of the problem and partly by the length of time needed to compute the result. This result is verified directly by using STE on the abstract model constructed in the previous step.

5.3 Analysis of proof of Benchmark 17

The two criteria for evaluating the method are the computational cost of the proof and the human cost. Table 1 shows the computational cost of verifying the IFIP WG10.4 Benchmark 17 multiplier for different bit-widths; the bit-widths shown are the size of the inputs, and the output is twice this. All runs were done on a 300MHz Pentium II with 128MB of RAM running Linux. At most 30MB of RAM were used. The size of the circuit is quadratic in the bit-width, and so the state space is clearly exponential in the bit-width. The table shows the cost for the new method proposed in this paper (the 'new method') and the cost of verifying the benchmark using the manual application of a compositional theory for STE as reported in [18] (the 'old method').

• STE: the cost of the basic trajectory evaluation that proves the basic assertions about the circuit behaviour (this cost is the same for the old and new methods);
• Comp: the total cost of the old method, including STE; as can be seen, the computational cost is small;
• Abs MC: the cost of doing the abstraction and model checking the abstract model (the timing of this component was difficult to measure accurately: the error range is approximately 10%);
• New: the total cost of the new method, including STE.

The computational cost of the new approach is much worse than the manual application of the compositional theory. Even so, in the worst case the cost of the abstraction and model checking the abstraction is approximately half the total computational cost. Thus, it can be seen from the computational costs that the method proposed in this paper is feasible. Bear in mind that at this stage the implementation of the algorithm presented in the previous section is naïve, and much work can be done on improving the algorithms and data structures used. (All the execution in column Abs MC is done in FL, an interpreted functional language, the data structures of which do not lend themselves to very efficient implementations of the algorithms.) What is gained with this extra computational cost is examined next.

Human cost of proof. On this example, the new method proved to be successful, with the increased computational cost being well worth the reduced human cost. Human insight is needed to understand what the important properties of the circuit are, but no expertise is needed in deciding how to combine the properties, such as is needed in many compositional approaches to the problem. Since model construction and trajectory evaluation are automatic, the method is much easier to use than the manual application of a compositional theory. One unexpected advantage of the new method was dealing with timing. With manual application of the compositional approach, great care needs to be taken with timing, since the input signals to the different stages are stable for different lengths of time. Even with the heuristic that the VossProver provides, this is an annoying level of detail which is essential to get right when composing different results. The new approach reduces the difficulty dramatically. In verifying the basic properties we need only take into account local timing properties (e.g. it takes 3ns for each stage to produce an answer once its inputs are stable).
The global timing properties automatically fall out of the process of performing trajectory evaluation on the abstract model.

5.4 Description of Benchmark 21

Benchmark 21 of the IFIP WG10.4 suite is based on Kung's design of a systolic array that computes convolution. Given k weights w₁, …, w_k and n inputs x₁, …, x_n, it computes the sequence y₁, …, y_{n+1−k}, where y_i = w₁·x_i + w₂·x_{i+1} + … + w_k·x_{i+k−1}. For performance reasons, the circuit is implemented as a parallel architecture, using a systolic array of cells, one cell for each weight. Each cell contains a multiplier and an adder. The overall architecture is shown in Figure 7 (in our examples, we assume an array of four cells and an input bit-width of 4).
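The specification the circuit must meet is the sliding-window sum given above. The following hedged Python sketch is a reference model of that specification only (with 0-based indexing), not of the systolic implementation.

```python
def convolve(w, x):
    """y_i = w_1*x_i + ... + w_k*x_{i+k-1}, 0-based here."""
    k = len(w)
    return [sum(w[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]

print(convolve([1, 2], [3, 4, 5]))  # → [1*3+2*4, 1*4+2*5] = [11, 14]
```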





[Figure 7: Architecture of one-dimensional array (a row of four Cells, with StreamIn and the Control lines entering and ResultOut leaving at the left); diagram not reproduced.]

Input to the circuit is StreamIn, which is b bits wide, and the output is ResultOut, which is 2b + k + 1 bits wide. The Control lines contain two clocks (clock 2 is the inverse of clock 1), a reset line and a number of other control lines.


In the first k clock cycles the weights are input to the circuit. In the next 2n clock cycles the data is input. The control lines are set appropriately in these periods so the cells know what to do with the input data. Data passes into the circuit from left to right, and as results are computed in the circuit they move from right to left. Examining the implementation of each cell, shown in Figure 8, this means that StreamOut of cell i is connected to StreamIn of cell i + 1, and ResultOut of cell i is connected to ResultIn of cell i − 1. In each clock cycle, the cell takes the new input x on StreamIn, the weight w stored in the cell, and the partial result y on ResultIn, and computes w·x + y, putting out the result on ResultOut. The circuit implementation is given as a detailed gate-level design, essentially a direct translation of the VHDL code into EXE format.

[Figure 8: Implementation of a cell (registers (REG), a demultiplexer (DMUX), a multiplier and an adder; control inputs clk1, clk2, Reset, StoreRes, StoreWgt, SelectWgtStr and StoreStr; data inputs ResultIn[12:0] and StreamIn[3:0]; outputs ResultOut[11:0] and StreamOut[3:0]); diagram not reproduced.]

5.5 Verification of Benchmark 21

The verification of the circuit follows its structure:

• First, each of the cells is verified;
• A new abstraction is created from the proofs of the individual cells;
• The final results are proved.

5.5.1 The proofs of the individual cells

The heart of each cell is a Benchmark 17 multiplier, and so the verification largely follows the description given in Section 5.2. A cell is more than a multiplier, so there is one more basic property that needs to be proved: that the adder and the flip-flops behave as expected. Thus the basic properties of each cell are that each stage of the multiplier, and the adder with its flip-flops, work correctly. A concrete example can be found in Appendix C.1. For each cell, the verification procedure is:

• Prove the basic properties;
• Build an abstract model from those basic properties;
• Prove the overall property by model checking the abstract model.

This shows why abstraction is so useful. We can build abstractions easily for the particular property we wish to prove, thereby gaining computational advantage.
18

5.5.2 Overall proof

Once each of the cells is verified, there are two more basic properties of the filter that must still be proved:

• If the weights are input at the appropriate times, the correct cell stores each weight in one of its flip-flops;
• If the data is input at the appropriate times, the cells get those inputs at the correct time.

Thus, we can characterise the filter with six basic properties (the four cell proofs and the two just mentioned above). An example is given in Appendix C.2. Once these properties have been proved, a new abstract model can be built. This new abstract model can now be model checked to prove the final result.
[Figure 9: Overview of proof; diagram not reproduced.]

This whole process is shown graphically in Figure 9. In the figure, the models are shown as ovals and the properties as squares. At the bottom is the most concrete model of the circuit. Using this model, we verify the most basic properties: for each cell there are the six basic properties, plus two other properties describing the input values (shown above the concrete model). For each cell, an abstract model is automatically created and verified. Using the verification of each cell plus the two basic properties, the most abstract model is created and then verified. Appendix C gives some more details of the verification.

5.6 Analysis of proof

The computational cost of the verification of Benchmark 21 is given in Table 2. The columns in the table are:

• Bit-width: the size of the input bit-width. If the input bit-width is b, the output bit-width is 2b + 4;
• L0 STE: the cost of doing STE on the underlying concrete model (the level 0 model);
• Abstraction: the cost of making the abstract models used;
• L1 STE: the total cost of doing STE on the abstract models of the cells (the level 1 model);
• L2 STE: the cost of model checking the most abstract (level 2) model of the circuit;
• Total: the overall cost (all the columns above, plus some miscellaneous costs).

Bit-width    L0 STE    Abstraction    L1 STE    L2 STE    Total
        4        24             10        34        66      140
        8        55             10        82        66      220
       16       178             10       271        63      533
       32      1005             12      1317        60     2402

Table 2: Time to verify Benchmark 21 (in seconds)

The largest circuit verified is approximately 40 000 gates. All times shown are in seconds. The results above confirm the first experimental results: the method proposed in this paper is feasible and can be used to verify interesting circuits. The computational costs are non-trivial but, even in this implementation, reasonable (and, as pointed out, better implementations are possible). The costs of the individual components are accurate to within 10% of the true figure. The results show that the computational cost scales gracefully with bit-width (considering that the number of state-holding components in the circuit is quadratically related to the bit-width). The L1 model checking is roughly quadratically related to bit-width, and, as would be expected, the L2 model checking is independent of bit-width. The advantages of this approach were also demonstrated in the example. Identifying the basic properties to be proved (both at the concrete and the level 1 abstract model) posed no problems. This circuit contains both combinational and sequential components, and being able to reason about timing locally, with the abstraction mechanism dealing with the global timing constraints, was in my experience a significant improvement over the manual application of the compositional theory reported in [18].

5.7 Debugging

As is often pointed out, verification should be used as part of system development; it is not an oracle that is just used at the end. This means that in practice most verification attempts will fail, and so an important part of the process must be giving the human verifiers clues as to why the verification process fails. A verification tool therefore has an important role as a debugger. An important advantage of model checkers is that if the verification fails, a counter-model can be provided. In the case of STE, a trace is given which shows under which conditions the verification attempt will fail. In our theoretical model, there are two possible reasons why a verification attempt may fail: the property may be false of the model, or the model may be too abstract. For both, the VossProver produces information that can be used by the human verifier to detect the problem. This information is suitable for presentation in graphical form. For example, the VossProver has a simple means of producing a timing chart. Figure 10 shows a possible output of this chart for a verification of the multiplier. The human verifier asks for the trace of certain parts of the circuit at various times (in this example, the binary-valued nodes CLK1, CLK2 and Reset, and the integer-valued (or bit-vector valued) nodes C.0.2 and ResultOut). UNC represents U (the bottom value in the lattice). Parameters can be given that affect how these timing charts are displayed; the VossProver has a simple Tcl/Tk interface which can be used for this purpose. How the values are shown depends on the underlying types. There are also some simple non-graphical debugging routines that can be used to provide information when a verification fails. The primary objective of the debugging routines was to help in the verification of the case studies given above. However, they are also intended as a 'proof of concept' to show that the methodology presented here can be used in a very practical setting.

[Figure 10: Sample example of graphical display of 'debugging' facilities; screenshot not reproduced.]

6 Conclusion

This paper has proposed a new method of constructing and using abstractions of circuit models. First, we prove a set of assertions using model checking. Second, an abstract model is constructed from the set of assertions. Third, the abstract model is model checked. The process of abstracting and model checking the new model can be repeated as often as is necessary. The advantages of this method of abstraction are:

• While the human verifier must have a good knowledge of the circuit to identify its important properties, s/he does not need specialised knowledge of theories of compositionality and abstraction, as most of the verification process is automated;
• Detailed gate-level or switch-level descriptions of the circuit can be used as the starting point;
• The method is very suitable for circuits where accurate models of time are important, as the process of abstraction loses no timing information;
• It is possible to distinguish between a property being false of the circuit and the level of abstraction being too high.

The experimental evidence demonstrated in Section 5 is promising. Work needs to be done in extending the theoretical framework, and especially in developing efficient algorithms to support it. A number of weaknesses have been identified (discussed in Section 4), which should mean that algorithms that are more powerful and have better performance can be developed. In particular, the aspects that need improvement are:

• The decision procedures in the VossProver (these determine whether two expressions are equal). These have evolved over the years and, although fairly robust, have been developed ad hoc, which means that they have some idiosyncrasies. As their performance and versatility are critical, it would be desirable for them to be redeveloped cleanly. The worst aspect of these decision procedures is that they are difficult to extend by anyone who is not intimate with the internals of the VossProver.

• The algorithms described in Section 4.2 can be improved by using a suitable generalisation of one of the well-known string-matching algorithms. Although the data structures of FL do not lend themselves to the most efficient implementations, improvements can be made.

At a foundational level, the temporal logic supported at present is relatively inexpressive. More work can be done to increase the expressiveness of the logic supported. This would extend the usefulness of the tool, and I believe is computationally feasible in the current framework. Pragmatically, it is also important to provide better support for debugging, to aid the user in the verification process.

Acknowledgements: This work was supported in part by grants of the Foundation for Research Development and the Research Committee of the University of the Witwatersrand, Johannesburg.

A Theory for the full logic TL

This section presents the theory for the full logic (not just trajectory formulas). Section A.1 outlines the theory of STE, and Section A.2 presents the theory of model synthesis.

A.1  The full theory of STE

This section first formalises the notion of the sets of minimal trajectories satisfying formulas, and then shows how these sets can be used in verification. If B is a subset of A, then b ∈ B is a minimal element of B if no other element in B is smaller than b (i.e. all elements of A smaller than b do not lie in B).

Definition A.1 If A is a set, B ⊆ A, and ⊑ a partial order on A, then

    min B = {b ∈ B : if ∃a ∈ A such that a ⊑ b, either a = b or a ∉ B}.
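Definition A.1 is directly executable. A small Python illustration, using the powerset of {1, 2, 3} under inclusion as a stand-in for the state lattice (`minimal` is an illustrative helper, not VossProver code):

```python
def minimal(B, leq):
    """Minimal elements of B under the partial order leq:
    keep b iff no other element of B lies strictly below it."""
    return {b for b in B if not any(a != b and leq(a, b) for a in B)}

# Partial order: subset inclusion on frozensets.
subset = lambda a, b: a <= b
B = {frozenset({1}), frozenset({1, 2}), frozenset({2, 3}), frozenset({3})}
mins = minimal(B, subset)   # {1} and {3} survive; their supersets do not
```

Note that scanning B alone suffices: an element of A below b that lies outside B cannot disqualify b under Definition A.1.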

Although we shall be comparing sets of sequences, containment is too restrictive, motivating a more general method of set comparison. The statement 'every trajectory that satisfies g also satisfies h' implies that the requirements for g to hold are stricter than the requirements for h to hold. Thus, if σ is a minimal trajectory satisfying g, σ must satisfy h. Since the requirements for g are stricter than the requirements on h, σ need not be a minimal trajectory satisfying h, but there must be a minimal trajectory, σ′, satisfying h with σ′ ⊑ σ. This is the intuition behind the following definition, which defines a relation over P(S), the power set of S.

Definition A.2 If S is a lattice with a partial order ⊑ and A, B ⊆ S, then A ⊑_P B if ∀b ∈ B, ∃a ∈ A such that a ⊑ b.

Theorem A.1 If g and h are TL formulas, then g ==>> h if and only if min h ⊑_P min g.
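The lifted relation of Definition A.2 is equally easy to sketch. Again the powerset lattice stands in for S, and `leq_P` is an illustrative name; note the asymmetry that makes ⊑_P a preorder rather than a partial order:

```python
def leq_P(A, B, leq):
    """A is below B in the lifted preorder: every b in B
    dominates some a in A."""
    return all(any(leq(a, b) for a in A) for b in B)

subset = lambda a, b: a <= b
A = {frozenset(), frozenset({1})}
B = {frozenset({1, 2}), frozenset({3})}

ok = leq_P(A, B, subset)        # {} below {3}, {1} below {1,2}
ko = leq_P(B, A, subset)        # nothing in B lies below the empty set
```

Theorem A.1 then reduces checking g ==>> h to one such comparison of minimal sets.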

Although computing the minimal sets directly is often not practical, it is possible to find approximations of the minimal sets (they are approximations because they may contain some redundant sequences, i.e. there may be two sequences in the set that are ordered by the partial order when only the lesser of the two need be in the set). The rest of this section shows how to construct two types of approximations to the minimal sets. Δᵗ(h) is an approximation of the set of minimal sequences that satisfy h, and Tᵗ(g) is an approximation of the set of minimal trajectories that satisfy g. The importance of these approximations is that (i) Δᵗ(h) ⊑_P Tᵗ(g) exactly when g ==>> h, and (ii) there is an efficient method for computing these approximations, to which we now turn.

Definition A.3 (Notation) If A and B are subsets of a lattice on which a partial order is defined, then A ⨆ B = {a ⊔ b : a ∈ A, b ∈ B}. If g : L → L, g(A) continues to represent the range of g, and similarly, g(⟨A, B⟩) = ⟨g(A), g(B)⟩.

Note that we write A ⨆ B rather than A ⊔ B since although A ⨆ B is a least upper bound (with respect to ⊑_P) of A and B, it is not the least upper bound (this reflects the fact that ⊑_P is a preorder and not a partial order).

Definition A.4 (s_q, q) ∈ S × Q is a defining pair for a predicate g if g(s_q) = q and ∀s ∈ S, g(s) = q implies that s_q ⊑ s.

Definition A.5 If g : S → Q then

    D(g) = {(s_q, q) ∈ S × Q : (s_q, q) is a defining pair for g}

is the defining set of g.

Given a formula, g, we can construct a defining sequence set, Δᵗ(g), which is a good approximation of the set of minimal sequences (it is a superset thereof). There is also a counterpart, Δᶠ(g), which is an approximation of the set of minimal sequences that satisfy g with truth value at least f. The details of the construction of these sets can be found elsewhere [18].

Definition A.6 (Defining sequence set) Let g ∈ TL. Define the defining sequence sets of g as Δ(g) = ⟨Δᵗ(g), Δᶠ(g)⟩, where Δ(g) is defined recursively by:

1. If g is simple, Δ(g) = ⟨Δᵗ(g), Δᶠ(g)⟩, where Δ^q(g) = {sXX… : (s, q) ∈ D(g) or (s, ⊤) ∈ D(g)}. This says that provided a sequence has as its first element a value at least as big as s, then it will satisfy g with truth value at least q. Note that Δ^q(g) could be empty.

2. Δ(g₁ ∨ g₂) ≝ ⟨Δᵗ(g₁) ∪ Δᵗ(g₂), Δᶠ(g₁) ⨆ Δᶠ(g₂)⟩.

   Informally, if a sequence satisfies g ∨ h with a truth value at least t then it must satisfy either g or h with truth value at least t. Similarly, if it satisfies g ∨ h with a truth value at least f then it must satisfy both g and h with a truth value at least f.

3. Δ(g₁ ∧ g₂) ≝ ⟨Δᵗ(g₁) ⨆ Δᵗ(g₂), Δᶠ(g₁) ∪ Δᶠ(g₂)⟩.

   This case is symmetric to the preceding one.

4. Δ(¬g) ≝ ⟨Δᶠ(g), Δᵗ(g)⟩.

   This is motivated by the fact that for q = f, t, a sequence σ satisfies ¬g with truth value at least q if and only if it satisfies g with truth value at least ¬q.

5. Δ(Next g) ≝ shift Δ(g), where shift(s₀s₁…) = Xs₀s₁….

   A sequence s₀s₁s₂… satisfies Next g with truth value at least q if and only if s₁s₂… satisfies g with at least value q.

6. Δ(g₁ Until g₂) ≝ ⟨Δᵗ(g₁ Until g₂), Δᶠ(g₁ Until g₂)⟩, where

       Δᵗ(g₁ Until g₂) ≝ ∪_{i=0}^{∞} (Δᵗ(Next⁰ g₁) ⨆ … ⨆ Δᵗ(Next^{i−1} g₁) ⨆ Δᵗ(Nextⁱ g₂))
       Δᶠ(g₁ Until g₂) ≝ ⨆_{i=0}^{∞} (Δᶠ(Next⁰ g₁) ∪ … ∪ Δᶠ(Next^{i−1} g₁) ∪ Δᶠ(Nextⁱ g₂))

   Recall that Nextᵏ g = g if k = 0 and Nextᵏ g = Next Next^{k−1} g otherwise. Here we consider the until operator as a series of disjunctions and conjunctions and apply the motivation above when constructing the defining sequence sets.

The following lemma gives a key property of these defining sets: a sequence, σ, satisfies a formula g at least with degree q if and only if there is an element of the defining set Δ^q(g) that is less than or equal to σ. This proof and proofs of other results stated in this section can be found in [18].

Lemma A.2 Let g ∈ TL, and let σ ∈ S^ω. For q = t, f, q ⊑ Sat(σ, g) iff ∃σ_g ∈ Δ^q(g) with σ_g ⊑ σ.
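To make the recursion of Definition A.6 concrete, here is a toy Python sketch over a small node lattice (per-node values X ⊑ 0, 1 ⊑ ⊤), with states as frozensets of (node, value) pairs, X represented by absence, and finite, implicitly X-padded sequences. It covers only the propositional and Next cases (Until is omitted), and it is in no way the BDD-based implementation used by the VossProver:

```python
def vjoin(u, v):
    """Join of two non-X node values; conflicting values join to top 'T'."""
    return u if u == v else 'T'

def sjoin(s1, s2):
    """Pointwise join of two states (frozensets of (node, value) pairs)."""
    d = dict(s1)
    for n, v in s2:
        d[n] = vjoin(d[n], v) if n in d else v
    return frozenset(d.items())

def seqjoin(p, q):
    """Pointwise join of two sequences, padding the shorter with X-states."""
    n = max(len(p), len(q))
    p = p + (frozenset(),) * (n - len(p))
    q = q + (frozenset(),) * (n - len(q))
    return tuple(sjoin(a, b) for a, b in zip(p, q))

def setjoin(A, B):
    """The set-level join (the square-cup operation on sequence sets)."""
    return {seqjoin(a, b) for a in A for b in B}

def delta(g):
    """Return (Delta_t(g), Delta_f(g)) following Definition A.6."""
    op = g[0]
    if op == 'is':                       # simple formula: node n carries bit b
        _, n, b = g
        return ({(frozenset({(n, b)}),)},
                {(frozenset({(n, '0' if b == '1' else '1')}),)})
    if op == 'not':                      # case 4: swap the two components
        t, f = delta(g[1]); return f, t
    if op == 'or':                       # case 2: union of t-sets, join of f-sets
        t1, f1 = delta(g[1]); t2, f2 = delta(g[2])
        return t1 | t2, setjoin(f1, f2)
    if op == 'and':                      # case 3: symmetric to 'or'
        t1, f1 = delta(g[1]); t2, f2 = delta(g[2])
        return setjoin(t1, t2), f1 | f2
    if op == 'next':                     # case 5: shift by one unknown state
        t, f = delta(g[1])
        return ({(frozenset(),) + s for s in t},
                {(frozenset(),) + s for s in f})

# g = (p = 1) and Next (not (q = 1)):
# one minimal satisfying sequence, two minimal falsifying ones
t, f = delta(('and', ('is', 'p', '1'), ('next', ('not', ('is', 'q', '1')))))
```

The ∧ case produces a single joined sequence requiring p = 1 now and q = 0 next, while the ∨-dual falsifying set stays a union, mirroring the symmetry of cases 2 and 3.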

The defining sequence sets contain the set of the minimal sequences that satisfy the formula. It is possible to find the analogous structures for trajectories: we can find an approximation of the set of minimal trajectories that satisfy a formula. The defining trajectory sets are computed by finding, for each sequence in the defining sequence sets, the smallest trajectory bigger than the sequence. Note that the use of the join operation requires that we are computing in a lattice with a top element (hence the need for Z in a circuit model).

Definition A.7 Let σ = s₀s₁s₂…. Define τ by τ(σ) = t₀t₁t₂…, where:

    tᵢ = s₀                 when i = 0
    tᵢ = Y(tᵢ₋₁) ⊔ sᵢ       otherwise

t₀t₁t₂… is the smallest trajectory larger than σ. s₀ is a possible starting point of a trajectory, so t₀ = s₀. Any run of the machine that starts in s₀ must be in a state at least as large as Y(s₀) after one time unit. So t₁ must be the smallest state larger than both s₁ and Y(s₀). By definition of join, t₁ = Y(s₀) ⊔ s₁ = Y(t₀) ⊔ s₁. This can be generalised to tᵢ = Y(tᵢ₋₁) ⊔ sᵢ.

Definition A.8 (Defining trajectory set) T(g) = ⟨Tᵗ(g), Tᶠ(g)⟩, where T^q(g) = {τ(δ) : δ ∈ Δ^q(g)}.

By construction, if σ_g ∈ T^q(g) then there is a δ_g ∈ Δ^q(g) with δ_g ⊑ σ_g. T(g) characterises the trajectories that satisfy g. This is formalised in the following lemma.

Lemma A.3 Let g ∈ TL, and let σ be a trajectory. For q = t, f, q ⊑ Sat(σ, g) if and only if ∃σ_g ∈ T^q(g) with σ_g ⊑ σ.
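Definition A.7 unfolds into a single left-to-right pass. A hedged sketch, with frozensets of asserted node names standing in for lattice states (ordered by inclusion, join = union) and a toy monotone next-state function Y that drives 'b' one step after 'a'; all names are illustrative:

```python
def tau(seq, Y, join):
    """Smallest trajectory above seq: t0 = s0, ti = Y(t(i-1)) join si."""
    out = []
    for i, s in enumerate(seq):
        out.append(s if i == 0 else join(Y(out[-1]), s))
    return tuple(out)

def Y(s):
    # toy next-state requirement: asserting 'a' forces 'b' next
    return frozenset({'b'}) if 'a' in s else frozenset()

seq = (frozenset({'a'}), frozenset(), frozenset({'c'}))
traj = tau(seq, Y, lambda x, y: x | y)
# the required consequence 'b' appears at time 1 even though seq left it open
```

With BDD-encoded states the same recurrence is what makes STE's simulation-like evaluation cheap: one pass, one join per step.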

The existence of defining sequence sets and defining trajectory sets provides a potentially efficient method for verification of assertions such as g ==>> h. The formula g, the antecedent, can be used to describe initial conditions or 'input' to the system. The consequent, h, describes the 'output'. This method is particularly efficient when the cardinalities of the defining sets are small. This verification approach is formalised in Theorem A.4, which shows that comparing the appropriate defining sets tells us when an assertion (e.g. g ==>> h) is true.

Theorem A.4 If g and h are TL formulas, then Δᵗ(h) ⊑_P Tᵗ(g) if and only if g ==>> h.

In our experience, most useful formulas have small defining sequence sets. Indeed, the computational limit is not so much the size of the sets as the complexity of the BDDs needed to represent them.

Definition A.9 If g ∈ TL, and ∃σ_g ∈ Δᵗ(g) such that ∀σ ∈ Δᵗ(g), σ_g ⊑ σ, then σ_g is known as the defining sequence of g. If σ_g is the defining sequence of g, then τ_g = τ(σ_g) is known as the defining trajectory of g.

A.2  Model synthesis for the full logic

This section extends the theory of Section 3 to the full logic.


A.2.1  Generalising the model

Let the lattice state space ⟨S, ⊑⟩ be given; this represents the state space of the circuit. The behaviour function is extended to range from sets of sequences to sets of sequences: Y : P(S^ω) → P(S^ω). There are two requirements on behaviour functions:

- Monotonicity: If S₁ ⊑_P S₂, then Y(S₁) ⊑_P Y(S₂); and σ ⊑ y for all y ∈ Y({σ}).

- Fix-point property: Y(Y(σ)) = Y(σ).

A next state function is a simple, special case of a behaviour function. A Y-trajectory is a sequence that is compatible with the behaviour function.

Definition A.10 A sequence, σ, is a Y-trajectory if, for all i, j, Y({σᵢⱼ}) ⊑_P {σⱼ}.

Definition A.11 A set of sequences, S, is a Y-trajectory set if each σ ∈ S is a Y-trajectory.

A.2.2  Abstractions

This section generalises the definitions and results of Section 3.2.

Definition A.12 If Y₁, Y₂ : P(S^ω) → P(S^ω) are behaviour functions, then Y₁ is an abstraction of Y₂ if for all Y₂-trajectory sets S, Y₁(S) ⊑_P Y₂(S).

The next definition is the same as the definition given in Section 3.2. A model M₁ is an abstraction of model M₂ if every assertion true of M₁ is also true of M₂.

Definition A.13 Let M₁ ≝ (⟨S, ⊑⟩, Y₁) and M₂ ≝ (⟨S, ⊑⟩, Y₂) be two models. M₁ is an abstraction of M₂ if ⊨_{M₁} ⟨g ==>> h⟩ implies that ⊨_{M₂} ⟨g ==>> h⟩.

These two concepts of abstraction are tied together.

Theorem A.5 Let M₁ = (⟨S, ⊑⟩, Y₁) and M₂ = (⟨S, ⊑⟩, Y₂) be two models. If Y₁ is an abstraction of Y₂, then M₁ is an abstraction of M₂.

Proof.
(1) Suppose ⊨_{M₁} ⟨g ==>> h⟩
(2) Let σ be a Y₂-trajectory satisfying g
(3) For all i, j, Y₂({σᵢⱼ}) ⊑_P {σⱼ}        (2), definition of Y-trajectory
(4) For all i, j, Y₁({σᵢⱼ}) ⊑_P {σⱼ}        (3), Y₁ is an abstraction of Y₂
(5) σ is a Y₁-trajectory                     (4)
(6) σ satisfies h                            (1), (2), (5)
(7) ⊨_{M₂} ⟨g ==>> h⟩                        (2), (6)

A.3  The synthesis of a model

Let ⟨S, ⊑⟩ be the state space of the circuit, and Y_C the concrete behaviour function. Consider the model M_C ≝ (⟨S, ⊑⟩, Y_C), and let A ≝ {⊨_{M_C} ⟨gᵢ ==>> hᵢ⟩ : i = 1, …, n} be a set of assertions that are true of the model. The next step is to construct a new behaviour function, Y, that is more abstract than Y_C; the model M ≝ (⟨S, ⊑⟩, Y) can now be used for verification. By the property of abstraction, if ⊨_M ⟨g ==>> h⟩, then ⊨_{M_C} ⟨g ==>> h⟩.

Y

How the abstraction is computed. Any -trajectory that satisfies gi should satisfy hi . This can be ensured if whenever  is a sequence satisfying gi (i.e. gi  ) it is also the case that all sequences in (  ) satisfy hi (i.e. t (hi ) P ())).

Yf g

v

v Y

y(S ) def = [fqf (hi ) :  (gi ) vP fg; i = 1; : : : ; ng q g :  2 S g: t

t

(3)

The function is monotonic because of the monotonicity of the join operations. The commutativity and associativity of join in the lattice makes this definition oblivious of the ordering of the gi and hi . Now we can define as before.

Y

Y() = Z:Z (y()): def

Y

A

(4)

M

By the construction of Y, each assertion in A will be true of the abstract model M; moreover, by STE's compositional theory, any assertion inferable from A will also hold. This means that the constructed model has deductive power.
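Equation 3 has a simple operational reading. The sketch below collapses sequences to single states (frozensets ordered by inclusion, join = union) and represents each proved assertion by a pair of defining sets; it illustrates the construction only, and is not the FL implementation:

```python
def y(S, assertions):
    """One application of the synthesised behaviour function (Equation 3):
    for each state sigma, join onto sigma the consequent requirements of
    every assertion whose antecedent requirement sigma already meets."""
    out = set()
    for sigma in S:
        acc = {sigma}
        for Dg, Dh in assertions:
            # Delta_t(g_i) below {sigma}: some defining state lies under sigma
            if any(g <= sigma for g in Dg):
                acc = {a | h for a in acc for h in Dh}   # set-level join
        out |= acc
    return out

# one proved assertion: "whenever 'a' holds, 'b' must hold"
assertion = ({frozenset({'a'})}, {frozenset({'b'})})
S = {frozenset({'a'}), frozenset({'c'})}
abstract = y(S, [assertion])   # 'b' is forced only where 'a' held
```

Because union is commutative and associative, the order in which assertions fire does not matter, matching the remark after Equation 3.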

Soundness of the model

To use M to reason about M_C, M must be an abstraction of M_C. By Theorem A.5, it is enough to show that Y is an abstraction of Y_C. The proofs show that Y is an abstraction of any behaviour function that satisfies all the assertions ⊨_M ⟨gᵢ ==>> hᵢ⟩. In the discussion below, let M be fixed, and let y and Y be as defined above in Equations 3 and 4.

Lemma A.6 Suppose M₁ = (⟨S, ⊑⟩, Y₁) is a model such that ⊨_{M₁} ⟨gᵢ ==>> hᵢ⟩, i = 1, …, n. If σ is a Y₁-trajectory, then y({σ}) ⊑_P Y₁({σ}).

Proof.
(1) y(S) ≝ ∪{⨆{Δᵗ(hᵢ) : Δᵗ(gᵢ) ⊑_P {σ}, i = 1, …, n} ⨆ {σ} : σ ∈ S}        Definition of y
(2) {σ} ⊑_P Y₁({σ})                                                          Monotonicity of Y₁
(3) Δᵗ(gᵢ) ⊑_P Y₁({σ}) ⟹ Δᵗ(hᵢ) ⊑ Y₁({σ})                                   ⊨_{M₁} ⟨gᵢ ==>> hᵢ⟩
(4) ⨆{Δᵗ(hᵢ) : Δᵗ(gᵢ) ⊑_P Y₁({σ})} ⨆ {σ} ⊑ Y₁({σ})                          (2), (3), definition of join
(5) {Δᵗ(hᵢ) : Δᵗ(gᵢ) ⊑_P {σ}} ⊆ {Δᵗ(hᵢ) : Δᵗ(gᵢ) ⊑_P Y₁({σ})}               (2)
(6) ⨆{Δᵗ(hᵢ) : Δᵗ(gᵢ) ⊑_P {σ}} ⊑_P ⨆{Δᵗ(hᵢ) : Δᵗ(gᵢ) ⊑_P Y₁({σ})}           (5)
(7) ⨆{Δᵗ(hᵢ) : Δᵗ(gᵢ) ⊑_P {σ}} ⨆ {σ} ⊑ ⨆{Δᵗ(hᵢ) : Δᵗ(gᵢ) ⊑_P Y₁({σ})} ⨆ {σ} (6)
(8) y({σ}) ⊑ Y₁({σ})                                                         (1), (4), (7)

Lemma A.7 Suppose M₁ = (⟨S, ⊑⟩, Y₁) is a model such that ⊨_{M₁} ⟨gᵢ ==>> hᵢ⟩, i = 1, …, n. If σ is a Y₁-trajectory, then yⁿ({σ}) ⊑ Y₁({σ}).

Proof.
(1) Let σ be a Y₁-trajectory.
(2) y({σ}) ⊑_P Y₁({σ})                      Lemma A.6
(3) y(y({σ})) ⊑_P y(Y₁({σ}))                Monotonicity of y
(4) Y₁(Y₁({σ})) = Y₁({σ})                   Fix-point property
(5) y(Y₁({σ})) ⊑_P Y₁(Y₁({σ}))              Lemma A.6, Y₁({σ}) a Y₁-trajectory
(6) y(y({σ})) ⊑ Y₁({σ})                     (2), (3), (4)

Theorem A.8 If M₁ = (⟨S, ⊑⟩, Y₁) is a model such that ⊨_{M₁} ⟨gᵢ ==>> hᵢ⟩, i = 1, …, n, then M is an abstraction of M₁.

Proof.
(1) Let σ be a Y₁-trajectory.
(2) y({σ}) ⊑_P Y₁({σ})                      Lemma A.6
(3) Y({σ}) ⊑_P Y₁({σ})                      (2), Lemma A.7
(4) Y is an abstraction of Y₁               (1), (3)
(5) M is an abstraction of M₁               Theorem A.5, (4)

Theorem A.8 not only shows that any results we infer using the constructed model M are true of the model M_C (so our results are sound), but that M is the weakest model for which all the given assertions hold.

B  Proof script of verification of Benchmark 17

// Proof of Benchmark 17 of IFIP WG10.4 Suite
// Uses STE, model abstraction and high level proof
// Scott Hazelhurst, January 1999

// miscellaneous
let high_bit = entry_width - 1;        // 0..entry_width-1
let max_time = 2*entry_width*entry_width;
let circ = make_fsm multiplier;

// ---------------- Node, variable declarations

// interior nodes -- the outputs of each stage
let RS i = nnode (R_S i);
let RC i = nnode (R_C i);
let TopBit i = nnode (R_C i);

// external nodes
let A = nnode AINP;
let B = nnode BINP;
let SumN = RS entry_width;

let a = nvar "a";
let b = nvar "b";
let c = nvar "c";
let d = nvar "d";

let partial {n :: int} = c;

The next part describes the BDD variable ordering to be used.

// BDD variable ordering for each stage of multiplier
let m_bdd_order {n::int} =
    n = 0           => order_int_1 [b, a]
  | n = entry_width => order_int_1 [partial n, d]
  |                    order_int_1 [b, a, partial n, d];

Now we get to the heart of the proof. This section of code defines the antecedent and consequent for the individual stages. Note that the zeroth stage is special as it does not have a partial sum.

let zero_cond i = (TopBit i) '= '0;

// Antecedent for row n of the multiplier
let MAnt {n::int} =
    n = 0 => During (0,3) ((A '= a) and (B '= b))
  |          During (0,3) ((A '= a) and (B '= b) and
                           (RS (n-1) '= (partial (n-1))) and
                           (RC (n-1) '= d) and
                           (zero_cond (n-1)));

// Consequent of row n of the multiplier
let res_of_row n =
    let lhs = (RS n) + (2 ** (n+1))*(RC n) in
    let rhs = n = 0 => a * (b)
                     | ((partial (n-1)) + (2 ** n) * d) + (2 ** n) * a * (b) in
    lhs '= rhs;

let MCon {n::int} = During 3 ((res_of_row n) and (zero_cond n));

And this leads us to the description of the proof of the individual stages. Note that in doing STE we choose a different BDD ordering for each stage.

let Mthm n =
    let bdd_order = (m_bdd_order n) in
    let ant = MAnt n in
    let con = MCon n in
    prove_voss_fsm bdd_order circ ant con;

To illustrate this, if the VossProver executes Mthm 1, the following proof results as the output of the prover (this is for the 64-bit-width input).

During [(0, 3)]
    A = a[63-0] and B[1] = b[1] and
    S0 = c[63-0] and C0[62-0] = d[62-0] and C0[63] = 0
==>>
During [(3, 3)]
    4*C1[62-0] + S1 = c[63-0] + 2*d[62-0] + 2*a[63-0]*b[1] and
    C1[63] = 0

The next section of code defines the proof of the adder: let

adder_proof = let adder_delay = 2 * entry_width in let post_ant_cond = ( (RS high_bit) ’= (partial high_bit)) and ( (RC high_bit) ’= d) and ( (TopBit high_bit) ’= 0) in let post_ant = During (0,adder_delay) post_ant_cond in let power = 2 ** entry_width in let rhs = ((partial high_bit) + power * d) in

28

let post_con_cond = (RS entry_width) ’= rhs in let post_con = During adder_delay post_con_cond in prove_voss_fsm (m_bdd_order entry_width) circ post_ant post_con;

It may be easier to visualise what this means by seeing the output of the proof of the final stage; this is what is produced when the VossProver executes adder_proof:

During [(0, 128)]
    S63 = c[126-0] and C63[62-0] = d[62-0] and C63[63] = 0
==>>
During [(128, 128)]
    S64 = (c[126-0] + ((2**64)*d[62-0]))[127-0]
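The per-stage theorems compose by arithmetic alone. The following Python sanity check (a toy 4-bit instance; the benchmark uses 64 bits) builds one admissible carry-save decomposition and confirms that the stage relations reported by the prover chain into full multiplier correctness. It is a numerical illustration, not a circuit simulation:

```python
def stage_outputs(a, b_bits):
    """One carry-save decomposition satisfying the stage relations:
    after stage n, 2^(n+1)*C[n] + S[n] equals the partial product of
    a with the low n+1 bits of b."""
    S, C, partial = [], [], 0
    for n, bit in enumerate(b_bits):
        partial += (a * bit) << n
        C.append(partial >> (n + 1))
        S.append(partial & ((1 << (n + 1)) - 1))
    return S, C

a, b = 13, 11
bits = [(b >> i) & 1 for i in range(4)]
S, C = stage_outputs(a, bits)

# stage 0 assertion: 2*C0 + S0 = a*b[0]
assert 2 * C[0] + S[0] == a * bits[0]
# stage n assertion: 2^(n+1)*Cn + Sn = S(n-1) + 2^n*C(n-1) + 2^n*a*b[n]
for n in range(1, 4):
    assert (1 << (n + 1)) * C[n] + S[n] == \
        S[n - 1] + (C[n - 1] << n) + ((a * bits[n]) << n)
# final adder assertion composes them into the full product
assert S[3] + (C[3] << 4) == a * b
```

This is the induction the abstract model exploits: each STE run certifies one local relation, and arithmetic (not further BDD work) delivers SumN = a*b.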

The next section of code now calls all the proofs of the individual stages.

let all_proofs = adder_proof:(map Mthm (0 upto (entry_width-1)));

Construct the new model.

let absmodel = ConstructModel all_proofs;

We define the global antecedent and consequent (our actual goal), and then model check, using the abstract model.

let InputAnts = During (0, max_time) ((A '= a) and (B '= b));

let OutputCons =
    let prop_delay = 6*entry_width in
    During (prop_delay,max_time) (SumN '= a*b);

let final_proof = abstract_ste absmodel InputAnts OutputCons;
final_proof;

C  Verification of Benchmark 21

C.1  Basic properties of Cell 0

These are the basic properties of cell 0 (as proved and reported by the VossProver). In this example, the input bit-width is 4, and the clock cycle is 2000 ns. The first five properties are the basic properties of the multiplier that we saw in Benchmark 17.

During [(0, 4)]
    S.0.3 = c[6-0] and C.0.3[2-0] = d[2-0] and C.0.3[3] = 0
==>>
During [(4, 4)]
    S.0.4 = (c[6-0] + ((2**4)*d[2-0]))[11-0]

During [(0, 4)]
    Dpath2.0 = a[3-0] and StreamIn.1[0] = b[0]
==>>
During [(4, 4)]
    2*C.0.0[2-0] + S.0.0 = a[3-0]*b[0] and
    C.0.0[3] = 0

During [(0, 4)]
    Dpath2.0 = a[3-0] and StreamIn.1[1] = b[1] and
    S.0.0 = c[3-0] and C.0.0[2-0] = d[2-0] and C.0.0[3] = 0
==>>
During [(4, 4)]
    4*C.0.1[2-0] + S.0.1 = c[3-0] + 2*d[2-0] + 2*a[3-0]*b[1] and
    C.0.1[3] = 0

During [(0, 4)]
    Dpath2.0 = a[3-0] and StreamIn.1[2] = b[2] and
    S.0.1 = c[4-0] and C.0.1[2-0] = d[2-0] and C.0.1[3] = 0
==>>
During [(4, 4)]
    8*C.0.2[2-0] + S.0.2 = c[4-0] + 4*d[2-0] + 4*a[3-0]*b[2] and
    C.0.2[3] = 0

During [(0, 4)]
    Dpath2.0 = a[3-0] and StreamIn.1[3] = b[3] and
    S.0.2 = c[5-0] and C.0.2[2-0] = d[2-0] and C.0.2[3] = 0
==>>
During [(4, 4)]
    16*C.0.3[2-0] + S.0.3 = c[5-0] + 8*d[2-0] + 8*a[3-0]*b[3] and
    C.0.3[3] = 0

The last property checks that the adder and the flip-flops work correctly.

During [(1500, 2000)]
    ResultIn.0 = e[11-0] and S.0.4 = f[7-0] and
During [(0, 999), (2000, 2999)] CLK2 = T and
During [(1000, 1999), (3000, 3999)] CLK2 = F and
During [(0, 3999)] Reset_N = T and StoreRes = T
==>>
During [(2500, 4000)]
    ResultOut = (e[11-0] + f[7-0])[11-0]

C.2  Abstract view of filter

This section shows the six basic properties that characterise the circuit. The first four describe the cells. What is shown is the output of the VossProver.

During [(1001, 2000)]
    Dpath2.0 = a[3-0] and StreamIn.1 = b[3-0] and ResultIn.0 = e[11-0] and
During [(0, 999), (2000, 2999)] CLK2 = T and
During [(1000, 1999), (3000, 3999)] CLK2 = F and
During [(0, 3999)] Reset_N = T and StoreRes = T
==>>
During [(2500, 4000)]
    ResultOut = (e + (a[3-0]*b[3-0]))[11-0]

During [(1001, 2000)]
    Dpath2.1 = a[3-0] and StreamIn.2 = b[3-0] and ResultIn.1 = e[11-0] and
During [(0, 999), (2000, 2999)] CLK2 = T and
During [(1000, 1999), (3000, 3999)] CLK2 = F and
During [(0, 3999)] Reset_N = T and StoreRes = T
==>>
During [(2500, 4000)]
    ResultIn.0 = (e + (a[3-0]*b[3-0]))[11-0]

During [(1001, 2000)]
    Dpath2.2 = a[3-0] and StreamIn.3 = b[3-0] and ResultIn.2 = e[11-0] and
During [(0, 999), (2000, 2999)] CLK2 = T and
During [(1000, 1999), (3000, 3999)] CLK2 = F and
During [(0, 3999)] Reset_N = T and StoreRes = T
==>>
During [(2500, 4000)]
    ResultIn.1 = (e + (a[3-0]*b[3-0]))[11-0]

During [(1001, 2000)]
    Dpath2.3 = a[3-0] and StreamIn.4 = b[3-0] and
During [(0, 999), (2000, 2999)] CLK2 = T and
During [(1000, 1999), (3000, 3999)] CLK2 = F and
During [(0, 3999)] Reset_N = T and StoreRes = T
==>>
During [(2500, 4000)]
    ResultIn.2 = a[3-0]*b[3-0]

The remaining two properties show that the weights and data values arrive at the correct time. They look complicated, but in fact they are quite simple: just long.

ClockInfo and
During [(28000, 31999)] StreamIn.0 = d[5][3-0] and
During [(24000, 27999)] StreamIn.0 = d[4][3-0] and
During [(20000, 23999)] StreamIn.0 = d[3][3-0] and
During [(16000, 19999)] StreamIn.0 = d[2][3-0] and
During [(12000, 15999)] StreamIn.0 = d[1][3-0] and
During [(8000, 11999)] StreamIn.0 = d[0][3-0] and
During [(0, 3), (10, 44000)] Reset_N = T and
During [(4, 9)] Reset_N = F and
During [(6000, 7999)] StoreWgt = T and StoreStr = F and SelectWgtStr = F and
During [(0, 5999), (8000, 41999)] StoreWgt = F and StoreStr = T and SelectWgtStr = T and
During [(0, 11999)] StoreRes = F and
During [(12000, 43999)] StoreRes = T
==>>
During [(9001, 11000)] StreamIn.1 = d[0][3-0] and
During [(13001, 15000)] StreamIn.1 = d[1][3-0] and
During [(17001, 19000)] StreamIn.1 = d[2][3-0] and
During [(21001, 23000)] StreamIn.1 = d[3][3-0] and
During [(25001, 27000)] StreamIn.1 = d[4][3-0] and
During [(29001, 31000)] StreamIn.1 = d[5][3-0] and
During [(11001, 13000)] StreamIn.2 = d[0][3-0] and
During [(15001, 17000)] StreamIn.2 = d[1][3-0] and
During [(19001, 21000)] StreamIn.2 = d[2][3-0] and
During [(23001, 25000)] StreamIn.2 = d[3][3-0] and
During [(27001, 29000)] StreamIn.2 = d[4][3-0] and
During [(31001, 33000)] StreamIn.2 = d[5][3-0] and
During [(13001, 15000)] StreamIn.3 = d[0][3-0] and
During [(17001, 19000)] StreamIn.3 = d[1][3-0] and
During [(21001, 23000)] StreamIn.3 = d[2][3-0] and
During [(25001, 27000)] StreamIn.3 = d[3][3-0] and
During [(29001, 31000)] StreamIn.3 = d[4][3-0] and
During [(33001, 35000)] StreamIn.3 = d[5][3-0] and
During [(15001, 17000)] StreamIn.4 = d[0][3-0] and
During [(19001, 21000)] StreamIn.4 = d[1][3-0] and
During [(23001, 25000)] StreamIn.4 = d[2][3-0] and
During [(27001, 29000)] StreamIn.4 = d[3][3-0] and
During [(31001, 33000)] StreamIn.4 = d[4][3-0] and
During [(35001, 37000)] StreamIn.4 = d[5][3-0]

ClockInfo and
During [(4000, 5999)] StreamIn.0 = w[2][3-0] and
During [(2000, 3999)] StreamIn.0 = w[1][3-0] and
During [(0, 1999)] StreamIn.0 = w[0][3-0] and
During [(0, 3), (10, 44000)] Reset_N = T and
During [(4, 9)] Reset_N = F and
During [(6000, 7999)] StreamIn.0 = w[3][3-0] and StoreWgt = T and StoreStr = F and SelectWgtStr = F and
During [(0, 5999), (8000, 41999)] StoreWgt = F and StoreStr = T and SelectWgtStr = T and
During [(0, 11999)] StoreRes = F and
During [(12000, 43999)] StoreRes = T
==>>
During [(8000, 41999)]
    Dpath2.0 = w[3][3-0] and
    Dpath2.1 = w[2][3-0] and
    Dpath2.2 = w[1][3-0] and
    Dpath2.3 = w[0][3-0]
ClockInfo is an abbreviation used to describe the clocking information. There are two clocks in the circuit; all ClockInfo says is that they go up and down at prescribed times. Here is the definition in full:

During [(0, 999), (2000, 2999), (4000, 4999), (6000, 6999), (8000, 8999),
        (10000, 10999), (12000, 12999), (14000, 14999), (16000, 16999),
        (18000, 18999), (20000, 20999), (22000, 22999), (24000, 24999),
        (26000, 26999), (28000, 28999), (30000, 30999), (32000, 32999),
        (34000, 34999), (36000, 36999), (38000, 38999), (40000, 40999),
        (42000, 42999)]
    CLK1 = F and CLK2 = T and
During [(1000, 1999), (3000, 3999), (5000, 5999), (7000, 7999), (9000, 9999),
        (11000, 11999), (13000, 13999), (15000, 15999), (17000, 17999),
        (19000, 19999), (21000, 21999), (23000, 23999), (25000, 25999),
        (27000, 27999), (29000, 29999), (31000, 31999), (33000, 33999),
        (35000, 35999), (37000, 37999), (39000, 39999), (41000, 41999),
        (43000, 43999)]
    CLK1 = T and CLK2 = F
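Chained together, the four cell properties make ResultOut a 4-tap FIR convolution of the weights with the data stream. A hedged Python sketch of that end-to-end arithmetic (the exact sample alignment is assumed here for illustration; the scheduling properties above are what actually fix it):

```python
def fir_output(w, d, j):
    """Output sample j: the sum built up by the four accumulate-and-forward
    cells, each contributing one weight-times-sample term."""
    return sum(w[i] * d[j - i] for i in range(len(w)))

w = [1, 2, 3, 4]       # weights held in Dpath2.0 .. Dpath2.3
d = [5, 6, 7, 8, 9]    # data stream samples
out3 = fir_output(w, d, 3)   # 1*8 + 2*7 + 3*6 + 4*5
```

Each cell property supplies one `w[i] * d[j - i]` term of this sum, with the ResultIn chain carrying the running total from cell 3 down to ResultOut.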

References

[1] M. Aagaard and C.-J.H. Seger. The Formal Verification of a Pipelined Double-Precision IEEE Floating-Point Multiplier. In ACM/IEEE International Conference on Computer-Aided Design, pages 7–10, November 1995.

[2] D. Beatty, R.E. Bryant, and C.-J. Seger. Formal Hardware Verification by Symbolic Ternary Trajectory Evaluation. In Proceedings of the 1991 IEEE/ACM Design Automation Conference. IEEE, June 1991.

[3] D.L. Beatty. A Methodology for Formal Hardware Verification with Application to Microprocessors. PhD thesis, Carnegie Mellon University, School of Computer Science, 1993.

[4] N.D. Belnap. A useful four-valued logic. In J.M. Dunn and G. Epstein, editors, Modern Uses of Multiple-Valued Logic. D. Reidel, Dordrecht, 1977.

[5] R.E. Bryant and Y.-A. Chen. Verification of Arithmetic Functions with Binary Moment Diagrams. Technical Report CMU-CS-94-160, School of Computer Science, Carnegie Mellon University, May 1994.

[6] J.R. Burch, E.M. Clarke, D.E. Long, K.L. McMillan, and D.L. Dill. Symbolic Model Checking for Sequential Circuit Verification. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13(4):401–424, April 1994.

[7] E. Clarke, M. Fujita, and X. Zhao. Hybrid Decision Diagrams: Overcoming the Limitations of MTBDDs and BMDs. Technical Report CMU-CS-95-159, School of Computer Science, Carnegie Mellon University, April 1995.

[8] E.M. Clarke, O. Grumberg, and D.E. Long. Model Checking and Abstraction. ACM Transactions on Programming Languages and Systems, 16(5):1512–1542, September 1994.

[9] M. Darwish. Formal Verification of a 32-Bit Pipelined RISC Processor. MASc thesis, University of British Columbia, Department of Electrical Engineering, 1994.

[10] D.L. Dill, editor. CAV '94: Proceedings of the Sixth International Conference on Computer Aided Verification, Lecture Notes in Computer Science 818, Berlin, June 1994. Springer-Verlag.

[11] M. Fitting. Bilattices and the Semantics of Logic Programming. The Journal of Logic Programming, 11(2):91–116, August 1991.

[12] M.L. Ginsberg. Bilattices and modal operators. Journal of Logic and Computation, 1(1):41–69, 1990.

[13] O. Grumberg and D.E. Long. Model Checking and Modular Verification. ACM Transactions on Programming Languages and Systems, 16(3):843–871, May 1994.

[14] S. Hazelhurst. Compositional Model Checking of Partially-Ordered State Spaces. PhD thesis, University of British Columbia, Department of Computer Science, 1996.

[15] S. Hazelhurst and C.-J.H. Seger. Composing symbolic trajectory evaluation results. In Dill [10], pages 273–285.

[16] S. Hazelhurst and C.-J.H. Seger. Model Checking Partially Ordered State Spaces. Technical Report 95-18, Department of Computer Science, University of British Columbia, July 1995. Available by anonymous ftp as ftp://ftp.cs.ubc.ca/pub/local/techreports/1995/TR-95-18.ps.gz.

[17] S. Hazelhurst and C.-J.H. Seger. A Simple Theorem Prover Based on Symbolic Trajectory Evaluation and BDD's. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 14(4):413–422, April 1995.

[18] S. Hazelhurst and C.-J.H. Seger. Symbolic Trajectory Evaluation. Chapter in Kropf [22].

[19] S. Hazelhurst and C.-J.H. Seger. An integrated approach to verifying large circuits: A case study. In Third Workshop on Designing Correct Circuits, September 1996. Springer-Verlag Electronic Workshops in Computing.

[20] R. Hojati and R.K. Brayton. Automatic Datapath Abstraction in Hardware Systems. In Wolper [26], pages 98–113.

[21] H. Hungar. Combining Model Checking and Theorem Proving to Verify Parallel Processes. In C. Courcoubetis, editor, Proceedings of the 5th International Conference on Computer-Aided Verification, Lecture Notes in Computer Science 697, pages 154–165, Berlin, July 1993. Springer-Verlag.

[22] T. Kropf, editor. Formal Hardware Verification: Methods and Systems in Comparison. State-of-the-Art Survey, Lecture Notes in Computer Science 1287. Springer-Verlag, Berlin, 1997.

[23] S. Rajan, N. Shankar, and M.K. Srivas. An Integration of Model Checking with Automated Proof Checking. In Wolper [26], pages 84–97.

[24] C.-J.H. Seger. Voss — A Formal Hardware Verification System User's Guide. Technical Report 93-45, Department of Computer Science, University of British Columbia, November 1993. Available by anonymous ftp as ftp://ftp.cs.ubc.ca/pub/local/techreports/1993/TR-93-45.ps.gz.

[25] C.-J.H. Seger and R.E. Bryant. Formal Verification by Symbolic Evaluation of Partially-Ordered Trajectories. Formal Methods in System Design, 6:147–189, March 1995.

[26] P. Wolper, editor. CAV '95: Proceedings of the Seventh International Conference on Computer Aided Verification, Lecture Notes in Computer Science 939, Berlin, July 1995. Springer-Verlag.

[27] Z. Zhu and C.-J.H. Seger. The Completeness of a Hardware Inference System. In Dill [10], pages 286–298.
