Verifying Arithmetic Hardware in Higher-Order Logic

Shiu-Kai Chin
Syracuse University
Syracuse, New York 13244-4100


Abstract

Theorem-based design uses logical inference rather than simulation to determine or verify the properties of design implementations. The initial effort to make such an approach practical is large when compared to conventional simulation. However, the cost of this effort is typically incurred only once. The hardware descriptions are parameterized so that the verification results apply to an entire set of designs rather than to a single instantiation. To illustrate these ideas, the logical structure used to verify arithmetic hardware in HOL is outlined. In particular, the roles of data abstraction, recursion, and induction are shown.

1. INTRODUCTION

Design automation is one of the cornerstones of VLSI design. What makes design automation software so crucial is the practical benefit of reuse. In software, reuse is "the reapplication of a variety of kinds of knowledge about one system to another similar system in order to reduce the effort of development and maintenance of that other system" [1]. Software, and design automation software in particular, is essentially a "construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions" [2]. The challenge is "the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation" [2]. Higher-order logic provides a means for documenting design knowledge in an executable and formally verified manner which supports the reuse of design knowledge.

Reuse is one method of leveraging past work to produce new and larger systems. In its simplest and most narrow form, reuse is a library of predefined components. In a larger sense, reuse means the reapplication of design knowledge to create another system [1], i.e., reusable concepts or parameterized designs [4]. Various designs created using the same concepts would be expected to have similar properties and, ideally, to be known correct according to some correctness criteria [3].

What are reusable concepts? Essentially, reusable concepts are abstractions of objects. The abstractions hide the implementation details and focus on properties, e.g., behavioral descriptions and relationships between inputs and outputs. The keys to design reuse are design specification, design verification, the ability to compose designs like functions, higher-order designs, and computer-aided design tools. The remainder of this paper illustrates the use of these keys.

2. SPECIFICATION AND VERIFICATION

One reuse paradigm which has existed since the early 70's is TTL design based upon the Texas Instruments 7400 MSI circuit family. What makes TTL design work is the TTL Data Book [7]. More precisely, it is not the physical book itself that is important, but the functional abstractions of the transistor circuits and the transistor-level implementations contained inside the book. For example, Figure 1 shows the transistor-level description of a 74S05 inverter. While containing only four transistors, one diode, and four resistors, it still has a fair amount of electrical information accompanying it, namely current levels and switching times, as shown by the tables next to the circuit schematic. While the implementation description in Figure 1 is quite detailed, what designers use more often is the specification description of the 74S05 shown in Figure 2: specifically, the relationship between output Y and input A given by Y = ¬A, and the precise identification of inputs and outputs. The underlying factor which makes TTL design so powerful is the implicit knowledge that for any object in the TTL Data Book, that object's implementation and specification descriptions are the same or equivalent. Designers can use the specified behavior in their designs and rarely, if ever, have to consider the physical transistor-level implementations of the TTL objects themselves. The interchange of specifications for implementations underpins the entire standard cell approach to design. It also is one of the underpinnings of hierarchical design.
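As a loose illustration of this interchange (not part of the paper's HOL development), the inverter can be modelled as two boolean functions, a behavioral specification and a hypothetical gate-level implementation, and the two checked equal on both inputs. The sketch is in OCaml, and inv_impl is an invented NAND-based realization, not the transistor circuit of Figure 1:

let inv_spec a = not a                  (* Figure 2 view: Y = not A *)
let nand a b = not (a && b)
let inv_impl a = nand a a               (* one hypothetical gate-level view *)

(* Exhaustive check over the two possible inputs. *)
let () = assert (List.for_all (fun a -> inv_spec a = inv_impl a) [true; false])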


3. FUNCTIONAL COMPOSITION AND HIGHER-ORDER DESIGNS

If we continue to examine TTL design as a successful instance of reusability, we find that another key to its success is functional composition and the implicit support for higher-order inputs and outputs, i.e., subsystem components or modules can be created and debugged separately and then joined or "glued" together to form the whole system. For example, if the inner product were to be computed, it would be reasonable to first compute all the product terms in one subsystem or module and sum them in another module. Of course, modularity is not a new idea. Nevertheless, underneath modularity is some notion of how components can be combined or composed to form new components and programs. As Hughes says in [6]:

"When writing a modular program to solve a problem, one first divides the problem into subproblems, then solves the subproblems and combines the solutions. The ways in which one can divide up the original problem depend directly on the ways in which one can glue solutions together."

"One can appreciate the importance of glue by an analogy with carpentry. A chair can be made quite easily by making the parts (seat, legs, back and so on) and sticking them together in the right way. But this depends on the ability to make joints and wood-glue. Lacking that ability, the only way to make a chair is to carve it in one piece out of a solid block of wood, a much harder task."

Hughes's view is that functional composition and higher-order functions are the glue, i.e., features which support functional abstraction and reusability. The utility of these features is exemplified by a program written in a higher-order and parameterized style which sums lists of numbers. First, we can define the notion of a list by:

(α)list ::= [] | cons α ((α)list)

which means that a list of elements of type α is either empty, denoted by [], or is a nonempty list constructed from an element of type α and another list of type (α)list, where α names the set of objects from which the list elements are drawn, e.g., natural numbers, booleans, etc. The function which actually constructs a list given an element of α and another list is denoted by cons. For convenience, bracket notation for finite lists is used, where list elements are enclosed by "[" and "]" and separated by ";". Thus,

[1] means cons 1 []
[1;2;3] means cons 1 (cons 2 (cons 3 []))

The notion of summing a list of numbers is expressed recursively by:

sum [] = 0
sum (cons num list) = num + (sum list)

The above definition can be interpreted to mean

sum list = if NULL(list) then 0 else HD(list) + (sum (TL(list)))

where NULL tests for the empty list, HD picks the first element of a list, e.g., HD([1;2;3]) = 1, and TL returns all but the first element of a list, e.g., TL([1;2;3]) = [2;3].

While the above definition satisfies the problem at hand, it is an instance of a more general procedure which recursively applies a function to a list of elements. A more general and higher-order "glue" function called reduce can be written, which has two parameters: ⊕, the binary operator applied to the list elements, and x, the value returned by reduce (⊕) x when applied to an empty list.

(reduce (⊕) x) [] = x
(reduce (⊕) x) (cons el list) = el ⊕ ((reduce (⊕) x) list)

The sum function is implemented by:

sum = reduce (+) 0

While using reduce may seem like extra work, consider specifying the function product as the function which multiplies all its list elements together:

product [] = 1
product (cons num list) = num × (product list)

We can reuse reduce to implement product by:

product = reduce (×) 1

Returning to our inner product example, if the inner product terms are represented by a list of pairs, e.g., [(x0,y0);(x1,y1); ... ;(xn,yn)], where "," denotes the infix pair operator and FST(x,y) = x and SND(x,y) = y, then the function product_pair can be defined as:

product_pair [] = []
product_pair (cons pair list) = cons ((FST pair) × (SND pair)) (product_pair list)

The inner product program inner_product is just the composition of product_pair and sum:

inner_product = sum o product_pair

where "o" denotes functional composition, i.e., (sum o product_pair) list = sum (product_pair list).
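The definitions above are essentially executable. As a point of comparison only (the paper's own notation is higher-order logic), here is a minimal OCaml sketch of the same functions, using OCaml's built-in lists in place of the cons notation; the names follow the text and the sample values are ours:

let rec reduce f x lst =
  match lst with
  | [] -> x                                  (* reduce (op) x [] = x *)
  | el :: rest -> f el (reduce f x rest)     (* el op (reduce (op) x rest) *)

let sum lst = reduce ( + ) 0 lst             (* sum = reduce (+) 0 *)
let product lst = reduce ( * ) 1 lst         (* product = reduce (x) 1 *)

(* product_pair multiplies the components of each pair; inner_product is
   the composition sum o product_pair. *)
let rec product_pair = function
  | [] -> []
  | (x, y) :: rest -> (x * y) :: product_pair rest

let inner_product pairs = sum (product_pair pairs)

let () =
  assert (sum [1; 2; 3] = 6);
  assert (product [1; 2; 3] = 6);
  assert (inner_product [(1, 2); (3, 4)] = 14)

Here reduce is just OCaml's List.fold_right with its arguments rearranged; the point, as in the text, is that sum, product, and inner_product are all small instances of the same glue.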

While the above examples are simple, they hopefully illustrate in spirit how higher-order parameterized functions and functional composition support reuse by gluing together other functions. To continue with Hughes's carpentry metaphor, once we have the parts and the glue, we have to put them together in the "right way". While writing programs in the declarative style shown above is helpful, documenting the properties of programs is essential for reuse. After all, how can something be reused confidently if its function is unknown or unverified? To illustrate how verification is done, we offer two examples.

3.1 A First Order Example

Returning to our previous definition of sum, we may wish to formally prove that the specification of sum given by:

sum [] = 0
sum (cons num list) = num + sum list

is in fact equivalent to its implementation,

sum = reduce (+) 0

Thus, we would want to prove the theorem "for all lists, sum list is equal to reduce (+) 0 list", i.e.,

|- ∀list. sum list = reduce (+) 0 list

where Γ |- t means that, given the (possibly empty) list of logical terms Γ, the term t is true. While the proof of this theorem is straightforward and easily done by hand, to manage the complexity of design the proof and the theorem must be machine-executable and machine-verifiable, so that the definitions and their properties are available and reusable by other designers and in later parts of the design. As an example of how this is done, we show a sample automated proof in HOL [5]. First, the definitions of sum and reduce are introduced as axiomatic definitions.

#let sum = new_recursive_definition false list_Axiom 'sum'
  "(sum [] = 0) /\
   (sum (CONS (n:num) nlist) = n + (sum nlist))";;

#let reduce = new_recursive_definition false list_Axiom 'reduce'
  "(reduce (f:num->num->num) (x:num) [] = x) /\
   (reduce (f) x (CONS el list) = (f el ((reduce (f) x) list)))";;

The definitions in higher-order logic appear within the double quotes "...". $+ is the prefix version of +. # is the HOL system prompt. The theorem-prover function new_recursive_definition false list_Axiom 'reduce' means that the higher-order logic expression which follows 1) is recursive, 2) is not an infix definition, 3) is based on the recursive structure of finite lists given by list_Axiom, and 4) should be assigned the name reduce in the database which makes up the definitions and theorems.

Next, we prove that the specification and implementation of sum are the same. First, the goal is specified.

#set_goal([], "∀list:(num)list. sum list = reduce ($+) 0 list");;
"∀list. sum list = reduce $+ 0 list"

The empty list in the set_goal line indicates that the theorem is to be proved with no other assumptions. To prove the goal for all finite lists, we use induction on lists to break the goal into its base case and inductive case.

#expand(LIST_INDUCT_TAC);;
OK..
2 subgoals
"∀h. sum(CONS h list) = reduce $+ 0(CONS h list)"
    [ "sum list = reduce $+ 0 list" ]

"sum[] = reduce $+ 0[]"

By rewriting the goal sum[] = reduce $+ 0[] using the definitions of sum and reduce, we can immediately prove the base case.

#expand(REWRITE_TAC [sum;reduce]);;
OK..
goal proved
|- sum[] = reduce $+ 0[]

Previous subproof:
"∀h. sum(CONS h list) = reduce $+ 0(CONS h list)"
    [ "sum list = reduce $+ 0 list" ]

We can also rewrite the inductive case using the definitions and then finish the proof by substituting the inductive hypothesis into the rewritten goal.

#expand(REWRITE_TAC [sum;reduce]);;
OK..
"∀h. h + (sum list) = h + (reduce $+ 0 list)"
    [ "sum list = reduce $+ 0 list" ]

#expand(ASM_REWRITE_TAC []);;
OK..
goal proved
. |- ∀h. h + (sum list) = h + (reduce $+ 0 list)
. |- ∀h. sum(CONS h list) = reduce $+ 0(CONS h list)
|- ∀list. sum list = reduce $+ 0 list

Having proved that the specification and implementation of sum are the same, the definitions and the correctness theorem are available and, in a mechanical sense, are "known" by the system and can be used and built upon by other designers or designs. In particular, for all inputs, the simpler specification can be substituted for the implementation. We note here that virtually the same proof can be done showing that product is equivalent to its implementation reduce (×) 1. In fact, the same proof code can be reused with the appropriate substitutions, e.g., product for sum.
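The reuse claim at the end of this example can also be exercised outside the theorem prover. The following OCaml sketch (ours, not the paper's HOL script) compares the specification-style recursive definitions of sum and product with their reduce-based implementations on a few sample lists:

let rec reduce f x = function [] -> x | el :: rest -> f el (reduce f x rest)

(* Specification-style definitions, written by direct recursion. *)
let rec sum_spec = function [] -> 0 | n :: rest -> n + sum_spec rest
let rec product_spec = function [] -> 1 | n :: rest -> n * product_spec rest

(* Implementations as instances of reduce. *)
let sum_impl lst = reduce ( + ) 0 lst
let product_impl lst = reduce ( * ) 1 lst

let samples = [ []; [7]; [1; 2; 3]; [4; 0; 5; 6] ]

let () =
  List.iter (fun l -> assert (sum_spec l = sum_impl l)) samples;
  List.iter (fun l -> assert (product_spec l = product_impl l)) samples

Testing only covers the sample lists, of course; the HOL proof above covers all finite lists, which is precisely the reason for carrying out the induction mechanically.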

3.2 A Higher-Order Example

We mentioned previously the importance of functional composition and higher-order functions and how they simplify the creation of larger programs. Functional composition and higher-order functions also simplify the verification task. For example, consider the function APPEND which concatenates two lists together, e.g., APPEND [1;2;3] [4;5] = [1;2;3;4;5]:

APPEND [] list = list
APPEND (cons x xs) list = cons x (APPEND xs list)

We intuitively understand that with respect to sum and product the following statements are true:

∀list1 list2. sum(APPEND list1 list2) = (sum list1) + (sum list2)
∀list1 list2. product(APPEND list1 list2) = (product list1) × (product list2)

We could do two separate proofs as we did in Section 3.1, but as both sum and product have the same underlying implementation in reduce, we instead prove a higher-order theorem about reduce and specialize it to the specific cases of sum and product. In effect, the verification of the above two properties is simplified by reusing the properties of reduce. The property we prove about reduce is:

|- ∀f z l1 l2. (∀x. f z x = x) ∧ (∀a b c. f a(f b c) = f(f a b)c) ⊃
   (reduce f z(APPEND l1 l2) = f(reduce f z l1)(reduce f z l2))

In other words, if z is the identity element for f, i.e., ∀x. f z x = x, and if f is associative, i.e., ∀a b c. f a(f b c) = f(f a b)c, then applying f to the reductions of the two lists is the same as reducing with f the two lists joined together, i.e., reduce f z(APPEND l1 l2) = f(reduce f z l1)(reduce f z l2).

We can prove the above using the HOL system as follows. First, we strip off the universally quantified variables f and z and use induction on l1.

#expand(STRIP_TAC THEN STRIP_TAC THEN INDUCT_THEN list_INDUCT ASSUME_TAC THEN REPEAT STRIP_TAC);;
OK..
2 subgoals
"reduce f z(APPEND(CONS h l1)l2) = f(reduce f z(CONS h l1))(reduce f z l2)"
    [ "∀l2. (∀x. f z x = x) /\ (∀a b c. f a(f b c) = f(f a b)c) ⊃
        (reduce f z(APPEND l1 l2) = f(reduce f z l1)(reduce f z l2))" ]
    [ "∀x. f z x = x" ]
    [ "∀a b c. f a(f b c) = f(f a b)c" ]

"reduce f z(APPEND[]l2) = f(reduce f z[])(reduce f z l2)"
    [ "∀x. f z x = x" ]
    [ "∀a b c. f a(f b c) = f(f a b)c" ]

The base case, reduce f z(APPEND[]l2), is proved by rewriting using the definitions of APPEND and reduce, and by rewriting using the assumption ["∀x. f z x = x"].

#expand(ASM_REWRITE_TAC [reduce;APPEND]);;
OK..
goal proved
|- reduce f z(APPEND[]l2) = f(reduce f z[])(reduce f z l2)

The inductive case is proved by rewriting using the definitions of APPEND and reduce, and by rewriting using two terms in the assumption list, listed here as A and B:

A. (reduce f z(APPEND l1 l2) = f(reduce f z l1)(reduce f z l2)), obtained by using the inference rule Modus Ponens on the conjunction of 1) ∀a b c. f a(f b c) = f(f a b)c and 2) ∀x. f z x = x with 3) ∀l2. (∀x. f z x = x) ∧ (∀a b c. f a(f b c) = f(f a b)c) ⊃ (reduce f z(APPEND l1 l2) = f(reduce f z l1)(reduce f z l2)), and

B. ∀a b c. f a(f b c) = f(f a b)c.

#expand(POP_ASSUM_LIST (\thl. REWRITE_TAC
  [(MP (SPEC_ALL (el 3 thl)) (CONJ (el 2 thl) (el 1 thl))); (el 1 thl)]));;
OK..
goal proved
|- ∀f z l1 l2. (∀x. f z x = x) /\ (∀a b c. f a(f b c) = f(f a b)c) ⊃
   (reduce f z(APPEND l1 l2) = f(reduce f z l1)(reduce f z l2))

If we give the theorem the name reduce_APPEND, we can specialize it to the cases for sum and product as shown below. Notice that the preconditions now depend on showing that 0 and 1 are the identity elements for + and * and that + and * are associative, where * denotes multiplication.

#let th1 = SPECL ["$+";"0"] reduce_APPEND;;
th1 = |- ∀l1 l2. (∀x. 0 + x = x) /\ (∀a b c. a + (b + c) = (a + b) + c) ⊃
   (reduce $+ 0(APPEND l1 l2) = (reduce $+ 0 l1) + (reduce $+ 0 l2))

#let th2 = SPECL ["$*";"1"] reduce_APPEND;;
th2 = |- ∀l1 l2. (∀x. 1 * x = x) /\ (∀a b c. a * (b * c) = (a * b) * c) ⊃
   (reduce $* 1(APPEND l1 l2) = (reduce $* 1 l1) * (reduce $* 1 l2))

Fortunately, the HOL system already has the necessary theorems about + and *, so all that is necessary is to rewrite th1 and th2 with the theorems for + and *, and with the previously proved theorems equating sum with reduce $+ 0 and product with reduce $* 1. The results are theorems th3 and th4.

#let th3 = REWRITE_RULE [ADD_CLAUSES;ADD_ASSOC;sum_theorem] th1;;
th3 = |- ∀l1 l2. sum(APPEND l1 l2) = (sum l1) + (sum l2)

#let th4 = REWRITE_RULE [MULT_CLAUSES;MULT_ASSOC;product_theorem] th2;;
th4 = |- ∀l1 l2. product(APPEND l1 l2) = (product l1) * (product l2)

Thus, we have proved the two desired properties by supplying the appropriate parameters to the higher-order theorem relating reduce and APPEND.
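Theorems th3 and th4 can also be exercised numerically. The sketch below (OCaml, our own illustration; APPEND is OCaml's @) tests the reduce/APPEND property for + and *, and a non-associative counterexample shows why the hypotheses of reduce_APPEND matter:

let rec reduce f x = function [] -> x | el :: rest -> f el (reduce f x rest)

(* The conclusion of reduce_APPEND, as a testable predicate:
   reduce f z (l1 @ l2) = f (reduce f z l1) (reduce f z l2). *)
let holds f z l1 l2 =
  reduce f z (l1 @ l2) = f (reduce f z l1) (reduce f z l2)

let () =
  let l1 = [1; 2] and l2 = [3; 4] in
  assert (holds ( + ) 0 l1 l2);   (* th3: sum(APPEND l1 l2) = sum l1 + sum l2 *)
  assert (holds ( * ) 1 l1 l2);   (* th4: product(APPEND l1 l2) = product l1 * product l2 *)
  (* subtraction is not associative and 0 is not a left identity for it,
     so the property is not expected to hold *)
  assert (not (holds ( - ) 0 l1 l2))

As before, the tests only sample the property; th3 and th4 hold for all finite lists because they are instances of a proved theorem.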

4. BUILDING A FORMAL FRAMEWORK

In Section 3 we showed two simple verification examples. What we did was show the equivalence between different expressions, where one expression could be thought of as a specification expression and the other as an implementation expression. As the examples were quite small, we were able to reason about them without using any additional structure. We now consider a slightly larger example, similar to [3] and [8], to illustrate the type of formal framework necessary for larger problems. The problem we consider is the creation of a computer-aided design program which will correctly create an array of half adders to sum a column of bits. We would like this program to 1) work for columns of arbitrary length, 2) be able to produce a wide variety of adder array structures, and 3) be "correct". What this example will show is the need to spend as much effort defining an appropriate framework for reasoning about the program as on the program itself. Key to this framework are the definitions of value functions which map boolean terms and structures to the natural numbers. These value functions enable us to define precisely what "correct" means.

4.1 Basic Definitions

Figure 3 shows the basic half adder cell, where a and b are the inputs to the half adder, and So and Co are the sum and carry outputs. Since we are using logic values to model numbers, we need to establish a mapping from the booleans to the natural numbers. We do this by defining a value function named BV which interprets the value of a bit.

|- ∀bit. BV bit = (bit → 1 | 0)

The above notation means that for all bits, the expression BV bit can be replaced by bit → 1 | 0 and vice versa, where a → b | c means if a is true then b else c. Thus, BV maps T to 1 and F to 0 in the expected way. Also, we need to explicitly state our interpretation of correct behavior. First, we define the outputs in terms of the inputs.

|- ∀a b. HASUM(a,b) = ¬a ∧ b ∨ a ∧ ¬b
|- ∀a b. HACARRY(a,b) = a ∧ b
|- ∀a b. HA(a,b) = HASUM(a,b),HACARRY(a,b)

The above three formulas define the individual sum and carry functions; the entire half adder cell is viewed as a pair where the first element of the pair is the sum output and the second element is the carry output. In fact, we can formally define the notion of what is meant by "the sum and carry outputs" of an adder by defining two accessor functions So and Co as follows:

|- ∀x. So x = FST x
|- ∀x. Co x = SND x
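These definitions translate directly into executable form, which makes them easy to experiment with. The following OCaml sketch is our own illustration (bv, ha_sum, ha_carry, ha, so, and co simply mirror the HOL names BV, HASUM, HACARRY, HA, So, and Co), with bits modelled as booleans:

(* Sketch of the basic half adder definitions. *)
let bv bit = if bit then 1 else 0                  (* BV: booleans to naturals *)

let ha_sum (a, b) = (not a && b) || (a && not b)   (* HASUM *)
let ha_carry (a, b) = a && b                       (* HACARRY *)
let ha (a, b) = (ha_sum (a, b), ha_carry (a, b))   (* HA returns (sum, carry) *)

let so = fst                                       (* So: sum output accessor *)
let co = snd                                       (* Co: carry output accessor *)

let () =
  assert (bv true = 1 && bv false = 0);
  (* adding the bits T and T gives sum F and carry T *)
  assert (so (ha (true, true)) = false && co (ha (true, true)) = true)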


One way to view the half adder cell is as a transformer, i.e., a function which rewrites inputs in one form into outputs with a different but equivalent representation. This gives rise to the following theorem, HACORRECT, which can be proved by boolean case analysis on each input, a and b.

HACORRECT = |- ∀a b. (BV a) + (BV b) = (BV(So(HA(a,b)))) + (2 × (BV(Co(HA(a,b)))))

What HACORRECT says is that for all inputs a and b, the sum of their bit values equals the bit value of the sum output plus two times the bit value of the carry output. Notice that HACORRECT allows us to substitute for the more complex expression (BV(So(HA(a,b)))) + (2 × (BV(Co(HA(a,b))))), which involves the implementation function HA, a simpler expression of its behavior, (BV a) + (BV b), which does not contain the implementation function at all.
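HACORRECT is small enough to check exhaustively outside the prover as well. Continuing the OCaml sketch (again our own illustration, with bv and ha as defined above):

(* Exhaustive check of BV a + BV b = BV(So(HA(a,b))) + 2 * BV(Co(HA(a,b))). *)
let bv bit = if bit then 1 else 0
let ha (a, b) = ((not a && b) || (a && not b), a && b)

let ha_correct (a, b) =
  let s, c = ha (a, b) in
  bv a + bv b = bv s + 2 * bv c

let () =
  let bits = [true; false] in
  List.iter (fun a -> List.iter (fun b -> assert (ha_correct (a, b))) bits) bits

Boolean case analysis over the four input combinations is exactly the proof strategy the text describes for HACORRECT.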

4.2 Intermediate Definitions

At this point, we have all of the atomic definitions, and we can generalize them to larger operations and structures. First, we can define the value of a column of bits in terms of the individual bit values.

|- (COLVAL [] = 0) ∧ (∀h t. COLVAL(CONS h t) = (BV h) + (COLVAL t))

The value function COLVAL is primitively recursive. Its base case says the value of the empty column is 0, and its inductive case says that the value of a non-empty column is the bit value of the first element of the column, h, plus the column value of the remainder of the column, t. Notice that we could have defined COLVAL using the higher-order function reduce and the MAP function we define next. We would then have been able to deduce its properties when composed with other functions like APPEND. For brevity, we have used the simpler definition above.

The MAP function is a higher-order function like reduce. It takes a function f and applies it to each element of a list.

|- (∀f. MAP f [] = []) ∧ (∀f h t. MAP f(CONS h t) = CONS (f h)(MAP f t))

From the definition above we see, for example, that MAP BV [T;T;F;T] = [1;1;0;1]. Using MAP we can generalize the application of HA onto lists of pairs.

|- ∀pairs. MAP_HA pairs =
     (let list = MAP HA pairs in
      let sumlist = MAP So list in
      let carrylist = MAP Co list in
      sumlist,carrylist)

Thus, the MAP_HA function takes the half adder, applies it to a list of input pairs, and produces a pair of lists as output, where the first list is the list of sum outputs and the second list is the list of carry outputs. For example, MAP_HA [(F,F);(F,T);(T,F);(T,T)] will produce internally: list = [(F,F);(T,F);(T,F);(F,T)]; sumlist = [F;T;T;F]; and carrylist = [F;F;F;T]. Since HA operates on pairs of input bits, for convenience we define a value function for pairs of bits.

|- ∀pair. PAIRBITVAL pair = (BV(FST pair)) + (BV(SND pair))

At this point we can prove the following theorem based on the definitions we already have.

MAP_HA_CORRECT =
|- ∀pairs. (COLVAL (FST(MAP_HA pairs))) + (2 × (COLVAL(SND(MAP_HA pairs)))) =
   (reduce $+ 0 (MAP PAIRBITVAL pairs))

Basically, the theorem says that the value of the sum column plus twice the value of the carry column equals the sum of all the values of the input pairs. It is proved by induction on the list pairs and by rewriting using the definitions.
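The intermediate definitions extend the OCaml sketch in the obvious way (illustrative only; colval, pairbitval, and map_ha mirror COLVAL, PAIRBITVAL, and MAP_HA), and MAP_HA_CORRECT can be spot-checked on the example column from the text:

let bv bit = if bit then 1 else 0
let ha (a, b) = ((not a && b) || (a && not b), a && b)

let rec colval = function [] -> 0 | h :: t -> bv h + colval t
let pairbitval (a, b) = bv a + bv b

(* MAP_HA: apply the half adder to every input pair, then split the results
   into a sum column and a carry column. *)
let map_ha pairs =
  let outs = List.map ha pairs in
  (List.map fst outs, List.map snd outs)

let () =
  let pairs = [(false, false); (false, true); (true, false); (true, true)] in
  let sums, carries = map_ha pairs in
  (* MAP_HA_CORRECT: sum column + 2 * carry column = sum of input pair values *)
  assert (colval sums + 2 * colval carries
          = List.fold_left ( + ) 0 (List.map pairbitval pairs))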

4.3 Higher-Order Design Functions

We now turn to generating interconnection structures for the array of half adders. We define a higher-order function named HACOLTRANS which has as inputs: 1) split, a function which takes a column of bits and produces a pair of lists by splitting the column into two parts, a list of pairs produced from the first 2n elements of the column and the remainder of the column, e.g., column [x0;x1;x2;x3;x4] could be split into ([(x0,x1);(x2,x3)],[x4]); 2) mergesum, a function which combines two sum columns; 3) mergecarry, a function which combines two carry columns; 4) sumcol, an input sum column; and 5) carrycol, an input carry column.
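The split parameter is given here only by its behavior, but that behavior is concrete enough to sketch. One possible split function of the kind HACOLTRANS could be handed (our own guess; the paper does not show a definition in this excerpt) is:

(* Pair up as many leading elements as possible; return the pairs and the
   remainder of the column. Integers stand in for the bits x0..x4. *)
let rec split_pairs col =
  match col with
  | a :: b :: rest ->
      let pairs, leftover = split_pairs rest in
      ((a, b) :: pairs, leftover)
  | _ -> ([], col)

let () =
  (* matches the example: [x0;x1;x2;x3;x4] splits into ([(x0,x1);(x2,x3)], [x4]) *)
  assert (split_pairs [0; 1; 2; 3; 4] = ([(0, 1); (2, 3)], [4]))

Other split functions that pair fewer of the leading elements would yield different adder-array structures, which is the variety the text says the design program should be able to produce.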
