Construction of Verified Software Systems with Program-Checking: An Application to Compiler Back-Ends

Thilo Gaul¹, Andreas Heberle¹, Wolf Zimmermann¹, and Wolfgang Goerigk²

¹ Institut für Programmstrukturen und Datenorganisation

Universität Karlsruhe, Zirkel 2, D-76131 Karlsruhe, Germany
E-mail: fgaul,heberle,[email protected]
² Institut für Informatik und Praktische Mathematik, Christian-Albrechts-Universität zu Kiel, Preußerstr. 1–9, D-24105 Kiel
E-mail: [email protected]

Abstract. This paper describes how program-checking can be used to significantly reduce the amount of verification work needed to establish the implementation correctness of software systems which may be partly generated by unverified construction tools. We show the practicability of our approach with an application to the construction of verified compiler back-ends. The basic idea of program-checking is to use an unverified algorithm whose results are checked by a verified component at run time. Run-time result verification in our approach ensures formal correctness of the software system and its implementation whenever partial correctness of the application is sufficient. In our example the approach not only simplifies the construction of verified compilers, because checking the result of the transformations is much simpler to verify than an optimizing code selection; we are also still able to use existing compiler generator tools without modifications. Compiler verification plays two roles in this paper: first, it closes the gap between verification at the high-level programming language and the implementation at machine level, using a verified compiler to translate the verified program to machine code; second, it serves as a large-scale case study for software verification. This work points out the tasks which still have to be verified, and it discusses the flexibility of the approach.¹

1 Introduction

¹ The work reported here has been supported by the Deutsche Forschungsgemeinschaft (DFG) in the Verifix project.

Erroneous and buggy software dramatically increases the costs of software development cycles and of the maintenance phase, and even minor bugs often spoil the confidence in a software system. Besides practical software engineering methods

to improve the software development process itself (CASE tools, ISO 9000, etc.), several methods have been proposed for gaining confidence in the output of programs. In software testing, programs are run on test inputs for which the corresponding correct output is known; it is then checked whether the actual output of the program matches the expected output. Testing is an often-used technique to improve the trustworthiness of a program, but there neither exist general methods for generating complete test inputs, even for small problems, nor are theorems proven about the behavior of a program that passes the test. Program verification [BM81] tries to establish correctness of the program by proving it mathematically against a specification of the problem. It suffers from the problem that it is very hard to prove programs correct, and the amount and complexity of proof work to be done for bigger programs, or for programs with heuristics and programming tricks, is not justifiable. Additionally, verification is usually performed on a high-level implementation language, so the same problem arises again at the machine language level.

In this paper we propose a checker-based approach to program verification, which can be applied if partial correctness of the application is sufficient. Compilers and other transformation systems are usually such partial applications, because in standard programming languages it is very easy to provide a program that forces the compiler to terminate due to memory restrictions. The main focus of this paper is on implementation verification by program-checking. The basic idea of program-checking we use in this paper is the following: given a precondition P and a postcondition Q for a program or function f(x), we call f partially correct if (P(x) ∧ defined f(x)) ⇒ Q(x, f(x)) holds. To be able to formulate the postcondition operationally, we generalize Q to a so-called checker Q' that implies Q. We also allow Q' to be partial: Q'(x, y) ⇒ Q(x, y).
We define a checked version f' of f that delivers the result of f iff the checker Q' proved the result to be correct; otherwise the result is undefined. For more details on this approach see Section 3 and [GGZ98]. In an implementation, f' is intended either to deliver the result or to emit an error message. This also means that the resulting application is "more partial" than the original specification of the problem. To achieve a practically useful implementation, the checker itself should be constructed in such a way that it is able to cover all outputs of f. In many cases it is easier to verify the checker Q' than to verify the generating algorithm f. We will show in our example that the difference can be up to a factor of 1:15, measured in the number of lines to be verified.

Compiler verification plays two roles in this paper: first, it closes the gap between verification at the high-level programming language and the implementation at machine level; a verified compiler can be used to implement the verified program in machine code. Second, it serves as a large-scale case study for software verification. We will concentrate on the back-end of an optimizing compiler, i.e. the code generation to native machine code using a term-rewrite system. The complete implementation of the code selection, register allocation and operation scheduling is generated with an unverified back-end tool (BEG). For the overall correctness of the compiler system we rely on the verification of the compiler specifications and on the verification of the implementation of the checker and of the other parts that cannot be covered by checking. In this paper we do not deal with hardware verification or verification of the operating system. Though correctness of the base system is essential for the correctness of the whole system, this is beyond the scope of our work.

2 Related Work

Our checker approach is closely related to the work of M. Blum et al. on result-checking [BK95,WB97] and to the ideas of [GG75]. A more detailed discussion of the theoretical aspects of our approach and proofs performed with ACL2 can be found in [GGZ98]. Program checking is already used in compiler construction for checking properties necessary to establish correctness of a transformation. Necula and Lee [NL98] describe a compiler which contains a certifier that automatically checks the type safety and the memory safety of any assembler program produced by the compiler. The certification process statically detects compilation errors of a certain kind, but it does not establish full correctness of the compilation. Nevertheless, this work shows that program checking can be used to produce efficient implementations that take safety requirements into account. [PSS98] apply the idea of program checking to the translation of finite state automata. They consider reactive systems. However, their assumptions lead to programs which are single loops with a body that implements a function from inputs to outputs, and their approach checks the loop body. Since compilers, as well as their different modules, implement functions, and every compiler refuses almost every program, we can apply program checking to the construction of correct compilers.

3 The General Approach

Consider a program π with input x and output y. Let P(x) be a precondition of π and Q(x, y) a postcondition. A program π is partially correct iff for every x satisfying P(x) either π refuses x or π computes an output y such that Q(x, y) holds (i.e. {P(x)} y := π(x) {Q(x, y)} in Hoare-triple notation). The idea of program checking can be summarized by the following function π':

  fun π'(x : T) : T' is
    y := π(x);
    if check_π(x, y) then return y;
    else abort
  end

The boolean function check_π must imply the postcondition Q. The following theorem shows the validity of the approach.
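To make the pattern concrete, here is a minimal sketch in Python (a hypothetical illustration, not from the paper): an unverified π sorts a list, and a small checker check_π verifies that the result is a sorted permutation of the input, which implies the postcondition Q.

```python
# Sketch of the program-checking pattern (hypothetical example).
# pi is the unverified component; check_pi is the small, verified checker.
from collections import Counter

def pi(xs):
    # Unverified component: could be generated code, a library, any black box.
    return sorted(xs)

def check_pi(x, y):
    # Verified checker: accept y iff it is a sorted permutation of x.
    # This is far simpler to verify than an arbitrary sorting algorithm.
    return all(a <= b for a, b in zip(y, y[1:])) and Counter(x) == Counter(y)

def pi_checked(x):
    # fun pi'(x): run pi, return its result iff the checker accepts, else abort.
    y = pi(x)
    if check_pi(x, y):
        return y
    raise RuntimeError("check failed: result rejected")

print(pi_checked([3, 1, 2]))  # [1, 2, 3]
```

If π returned a wrong result, e.g. dropped an element, check_π would reject it and π' would abort rather than deliver the faulty output.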

Theorem 1 (Program Checking). Let π(x : T) : T' be an unverified program without side-effects, and let check_π(x : T, y : T') : bool be a side-effect-free function satisfying {P(x)} z := check_π(x, y) {z = true ⇒ Q(x, y)}. Then it holds that {P(x)} y := π'(x) {Q(x, y)}.

Proof. We sketch the proof; it can be formalized using the standard Hoare calculus. Since π is side-effect free, the input x remains unchanged. If π' does not abort, it returns y. The y returned is the same as the one passed to check_π(x, y), because check_π(x, y) is side-effect free. Furthermore, when y is returned, check_π(x, y) = true must hold. Hence Q(x, y) holds.

Hence, the only assumption on π is its side-effect freeness; no further assumptions on π are made. The function π' therefore provides a bootstrapping approach to construct partially correct programs. It is useful to apply the approach if the formal verification of check_π is much easier than that of π, or if the size of π is much larger than the size of check_π. However, the difficulty is the assumption of the side-effect freeness of π. We will call this property of a program being side-effect free the "wrap" property. Our definition of program-checking leads us directly to a modularization of the implementation into outer parts and inner parts:

Outer parts...
- implement the checker check_π and have to be verified against it;
- have to be implemented verified at machine level, i.e. compiled with a verified compiler;
- perform the interaction of several checkers and form the "outer" interface of the whole system;
- assure a functional view on the inner parts by wrapping them.

Inner parts...
- are the code pieces to be checked;
- implement π, unverified, in any language;
- may be generated by unverified tools.

Figure 1. Program Checking on Machine Implementation Level. (Layers: Interface (verified), Wrapper (verified), Inner Code (unverified), Glue Code (verified); Outer Code = Glue + Wrapper + Interface.)

Figure 1 depicts the layout of machine implementations and their subdivision into inner/unverified and outer/verified code pieces. Glue code subsumes all functionality of the application that cannot be implemented verified by checking.

3.1 Implementation Framework

Figure 2. From Implementation Language Level to Machine Code. (Verified components, including the verified wrapper and interface, are compiled with a verified compiler CV; unverified components may be compiled with an unverified compiler CU.)

Figures 2 and 3 depict the three implementation and specification levels of our framework together with the compilation steps. Figure 2 shows the translation process from the implementation language level to target machine code. Verified components, including the checker, have to be compiled with verified compilers to assure implementation correctness; unverified components may be translated to machine code with unverified tools (we will discuss other possibilities in Section 4). The same applies to the process of generating the software at the implementation level: inner parts (unverified components) may be generated by likewise unverified tools or can be taken from unverified libraries. We will use this heavily in our compiler case study. In the following section we discuss approaches for encapsulating π such that side-effect freeness is guaranteed.

4 Correct Machine Implementation

As we stressed in the previous section, to achieve correctness of the whole software system we must be able to treat the inner (checked) parts functionally, that is, as functions without side-effects (wrap property).

Figure 3. From Generator Level to Implementation Language. (Unverified components at the implementation language level may be generated (Gen) from specifications or taken from libraries (Bib); verified components and the verified wrapper and interface are written directly at the implementation language level.)

We must assure that the inner parts cannot influence the other parts in any way; they must not even be able to change any interaction of the outer parts with the environment, or the environment itself. If an inner part were able to change data, or even worse, the program of the checker, then the correctness of the checker would rely on the correctness of the compiler used and on certain restrictions of the language. Provided the operating system is correct, there are several alternatives to make sure that it is impossible for inner parts to modify the memory of the outer parts. Provided our implementation language does not allow pointers into memory, we are able to prove that the implementation behaves safely. Unfortunately, the verification of this property is difficult. There are three major possibilities to establish the wrap property of an inner part at the machine implementation level:

- Verification of the binary machine code of the inner parts. We could generate the machine code implementation of the inner parts with an arbitrary tool (i.e. compiler/assembler) and verify the wrap properties at machine level. Depending on the complexity of the target machine language, this usually spoils the effort we achieved in reducing the verification complexity of the process. This does not seem to be a practical way for real-world target platforms.

- Implementation of the inner parts in a safe, higher programming language and compilation with a verified compiler. If the implementation language of the inner parts assures for any program that it cannot affect other parts, and the interface is proven to behave purely functionally, we obtain a wrapped machine code implementation of the inner parts if we compile them with a verified compiler (see Figure 4, upper part). For C, for example, this is possible only for a very restricted language subset; practical examples are Sather-K or Java. The Software Fault Isolation technique can be applied similarly.

Figure 4. Establishing the "Wrap" Property at Machine Code Level. (To be prevented: erroneous side effects. Upper part: code produced by a verified compiler, or object-code verified, with the "wrap" characteristic proved. Lower part: separation by the environment, via files, inter-process communication, the operating system, a physically separated system, or remote procedure calls.)

- Inner parts wrapped by the environment. If we cannot prove anything about the external behavior of the inner implementation, we must use a "higher" mechanism to assure the wrap properties. We consider inner and outer parts as parallel processes with different logical memory spaces. This can be done by running the different parts on different machines, but we can also use the operating system to assure that the memory of one process cannot be altered by another process (see Figure 4, lower part). Not every operating system can assure such properties (Windows 3/95), but modern multi-user/multi-tasking systems such as UNIX or Windows NT can. Nevertheless, in both cases we have to verify and implement the protocol over which the two processes communicate.
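As a sketch of this third alternative (hypothetical, assuming a POSIX-like system with Python available), the unverified inner part can be run as a separate child process, so that the operating system separates its memory from the checker's; only textual output crosses the process boundary:

```python
# Hypothetical sketch: the unverified inner part runs in its own OS process.
# The operating system separates its memory from the outer (verified) part;
# the two sides communicate only over pipes carrying text.
import subprocess
import sys
from collections import Counter

# Stands in for the unverified inner part: sorts whitespace-separated tokens.
INNER_PART = "import sys; print(' '.join(sorted(sys.stdin.read().split())))"

def run_inner(tokens):
    # Outer part: launch the inner part as a child process and capture output.
    proc = subprocess.run([sys.executable, "-c", INNER_PART],
                          input=" ".join(tokens),
                          capture_output=True, text=True, check=True)
    return proc.stdout.split()

def check(x, y):
    # Verified checker on the outer side: y is a sorted permutation of x.
    return all(a <= b for a, b in zip(y, y[1:])) and Counter(x) == Counter(y)

x = ["pear", "apple", "plum"]
y = run_inner(x)
assert check(x, y)
print(y)  # ['apple', 'pear', 'plum']
```

The child process here is simulated by a second Python interpreter; in the paper's setting it would be the BEG-generated code, and the textual channel would be the AIF format described next.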

4.1 Architecture Independent Communication Protocol: AIF

We introduce an abstract interchange format, AIF, which serves as the communication protocol in several kinds of interfaces. It is defined as a textual representation, because coupling on binary formats causes many data-mapping problems if different implementation languages are used. The format is suitable for tree and graph structures; its main advantage is its target independence. In our implementation of AIF, inner code and checkers communicate by mutual file access, but they could also exchange data via process communication or UNIX pipes. Figure 5 shows in more detail the interface part which transfers results of the inner parts back to the checker. The inner part passes the result to the printer of the interface, which writes the AIF representation to a file.

Figure 5. File interface between inner and outer parts. (The unverified inner part computes π(x) and passes the result through a printer into an AIF file; a verified reader parses the file and hands x and y to check_π(x, y), which either accepts or reports a check error.)

This file is then parsed by a verified reader in the outer part, which passes the internal result representation to the checker and, if the check was successful, passes it on to the next compilation step. We assume that the implementation language of the inner parts may differ from the implementation language for which a verified compiler exists. The AIF is the vehicle to close this gap, since we transform the interchange format to an internal representation. This representation of the result is reliable once the check has succeeded. The representation should be as simple as possible in order to allow the construction of a simple recursive descent parser whose correctness proof is feasible. Though a concrete AIF representation is an instance of a concrete language and is therefore language dependent, we are able to define a language-independent frame. An AIF datum is represented by its root node. A node is specified by several terms: it defines a unique reference, it has a type, and it defines a list of successor nodes and a list of attributes. Attributes can be references to other nodes (e.g. this is used to encode back edges of a graph).

  Node ::= [ Label ':' ] Type '(' [ Node ( ',' Node )* ] ')' ',' '(' Attribute ( ',' Attribute )* ')' ';'

The definition of types is determined by the language definition of the intermediate language; currently, a type is encoded by a string. Attributes can be numbers, strings, or references to nodes.

  Attribute ::= Label | Number | '"' Letter* '"'

Labels always start with '@'; strings may not begin with '@'.

  Label  ::= '@' Number
  Number ::= Digit Digit*
  Digit  ::= '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'

For a precise definition of AIF see [HG98].
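The grammar above is simple enough for a hand-written recursive descent parser. The following Python sketch is a hypothetical, simplified illustration of such a parser (e.g. the tokenizer accepts any alphanumeric word as a Type):

```python
# Hypothetical recursive descent parser for the AIF Node grammar above.
import re

# Tokens: labels (@Number), numbers, quoted strings, types, punctuation.
TOKEN = re.compile(r'@\d+|\d+|"[^"]*"|[A-Za-z_]\w*|[():,;]')

def tokenize(text):
    return TOKEN.findall(text)

class Parser:
    def __init__(self, toks):
        self.toks, self.i = toks, 0

    def peek(self):
        return self.toks[self.i] if self.i < len(self.toks) else None

    def eat(self, expected=None):
        t = self.toks[self.i]; self.i += 1
        if expected is not None and t != expected:
            raise SyntaxError(f"expected {expected!r}, got {t!r}")
        return t

    def node(self):
        # Node ::= [Label ':'] Type '(' [Node (',' Node)*] ')'
        #          ',' '(' Attribute (',' Attribute)* ')' ';'
        label = None
        if self.peek() and self.peek().startswith('@'):
            label = self.eat(); self.eat(':')
        typ = self.eat()
        self.eat('(')
        succs = []
        if self.peek() != ')':
            succs.append(self.node())
            while self.peek() == ',':
                self.eat(','); succs.append(self.node())
        self.eat(')'); self.eat(','); self.eat('(')
        attrs = [self.eat()]
        while self.peek() == ',':
            self.eat(','); attrs.append(self.eat())
        self.eat(')'); self.eat(';')
        return (label, typ, succs, attrs)

# Example: an intadd node with two intconst children and back references.
aif = '@1:intadd(@2:intconst(),("5");,@3:intconst(),("7");),(@2,@3);'
tree = Parser(tokenize(aif)).node()
print(tree[1], len(tree[2]), tree[3])  # intadd 2 ['@2', '@3']
```

The concrete node types and attribute conventions in the example are invented; a real AIF instance is determined by the intermediate language definition [HG98].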

5 Case Study: A Correct Compiler Back-End

The most error-prone part of a compiler is the back-end, which produces the machine code. In this transformation from the machine-independent intermediate language to native machine code, many complex optimizations are performed and optimizing transformation techniques are used. For the construction of optimizing code selectors, several subclasses of cost-controlled term rewriting are known; well-established techniques are the classes of tree transducers [ESL89,AG85] and bottom-up rewrite systems (BURS) [NK96,Pro95]. Implementations of such cost-controlled rewrite systems use complex algorithms and heuristics because, depending on the class of the rewrite system, the underlying problems are NP-complete. Additionally, for a practical compiler we need register allocators and instruction schedulers, which are usually integrated into the generator that produces the code selector implementation from specifications.

Figure 6. Verification Architecture of the Back-End. (Three levels: at the specification level, the semantics Q of the source, the compiling specification f, and the semantics Z of the target; at the implementation language level, abstract and concrete programs together with the checker Check; at the machine implementation level, the compiled concrete programs together with Check'. f' is obtained from f by compilation; the concrete programs are produced by the generator.)

In order to prove the practicability of our approach, we used the widely accepted, industrially used back-end generator tool BEG as the core generator of our back-end framework. One has to keep in mind that this tool can only generate unverified implementations that have to be checked, because we do not even have the sources of the tool. We will see in the measurements (Section 5.1) that it is also, from a practical point of view, impossible to verify the generated C code. Our intermediate language is the basic-block-graph-oriented representation MIS; a definition and a formal Abstract State Machine (ASM²) semantics can be found in [GHHZ96]. The target we chose for this example is the DEC Alpha architecture, a high-performance RISC workstation processor family. A formal ASM-based semantics definition suitable for the purposes of back-end verification can be found in [Gau95]. All our proofs are done w.r.t. these ASM-based semantics of the intermediate and target languages. Figure 6 shows the verification view on the concrete instance of our framework for back-end verification. The starting point is a completely verified compiling specification, which consists of verified transformation rules and correctness constraints for the global rewrite mechanism, the register allocation and the scheduling of program trees. Precise definitions and proofs are given in [ZG97]. The checker verifies those constraints at run time of the compiler and therefore assures the correctness of the compilation process. The BEG tool stays "outside" the verification process, because it does not matter how we generated the implementation of the corresponding inner parts.

Figure 7. Architecture of the Checked Back-End. (BEG generates the inner part from the code selection specification; the inner part transforms the basic-block graph (BBG) of the MIS program via code selection, instruction scheduling and register allocation into an attributed basic-block graph (ABBG); the verified checker either accepts the transformation and emits DEC Alpha code or reports a check error.)

The correctness notion we use to prove compiler correctness is based on the observability of states. A compilation is correct if it preserves the observable (i.e. input/output) behavior of the source program, in our case the intermediate representation MIS. This gives us the freedom not to compile programs on a one-to-one basis, but to transform programs in an optimizing way. For a general discussion of the Verifix compiler correctness approach see [GZG+98,ZG97].

² An introduction to Abstract State Machines (ASM) can be found in [Gur95].

The back-end tool BEG generates the code selector from a set of term-rewrite rules annotated with costs and mappings to target machine code. The implementation consists of a tree pattern matching phase that tries to find a cost-minimal cover of the program tree w.r.t. the rewrite rules; afterwards, register allocation and code emitting traversals are initiated. In [ZG97] we showed how to decompose the overall correctness of such a rewrite system into the local correctness of single term-rewrite rules and global correctness aspects that deal with the rewrite process, scheduling and register allocation. We use this decomposition to define the inner and outer parts of the back-end and to formulate the correctness requirements for the checkers.

Figure 8. Checking an Attributed Basic-Block-Graph. (Each node of the program tree, numbered 1, 1.1, 1.2, 1.1.1, 1.1.2, carries three attributes: the applied Rule, the allocated Register, and its position in the Schedule 1.1.2, 1.2, 1.1.1, 1.1, 1.)

We are able to encapsulate the complete code selection part, with register allocation and scheduling, generated by the back-end tool into an inner part. Figure 7 shows the architecture of the checked back-end. The input to the inner, BEG-generated part is the basic-block graph (BBG); the output is the graph annotated with rewriting attributes (ABBG) in AIF format. The BBG is annotated with the required rewrite information while the inner process runs. We do not have to know how this annotation is performed³; we are only interested in the output of the printer (see Figure 5). The interface between printer and reader is the annotated BBG (ABBG), and of course the reader has to be verified. The checker either rejects the concrete ABBG at run time with an error message or passes the graph to the transformation phase, which finally performs the rewrite sequence and emits the target (DEC Alpha assembler) code. The attributes of the ABBG nodes are the rules to be applied, the allocated registers, and an order in the

³ In practice we do of course know what the generated parts do, but we are not sure about their correctness.

schedule in which the tree must be evaluated. The transformation phase has to be verified, but it is a very simple task now, because it does not have to search for a possible, cost-optimal rewriting; it only performs the transformation. The result is native machine code in assembler format; therefore we still rely on a verified assembler/linker to achieve a completely verified compiler tool chain. Figure 8 depicts the checking situation for the MIS expression

  intadd(intmult(intconst(A), intconst(B)), intconst(C))

which is the usual compilation of the source language term (A*B+C)⁴. A typical set of code selection rules includes at least one rule for every intermediate language construct, plus additional optimizing rules:

  I:   RULE intadd(X,Y) -> Z             { ADDQ X,Y,Z }
  II:  RULE intmult(X,Y) -> Z            { MULQ X,Y,Z }
  III: RULE intconst[C] -> X             { LDA X,#C(r31) }
  IV:  RULE intmult(intconst[C],X) -> Y  { MULQ X,#C,Y }
  V:   RULE intadd(X,intconst[C]) -> Y   { ADDQ X,#C,Y }

The right column, in braces, contains the concrete DEC Alpha machine instructions that implement the semantics of the intermediate language pattern on the left-hand side. The potential optimizations arise from the different possibilities to cover the tree; all of the possible covers are correct transformations. In our example the code selection algorithm decided to apply rules IV and V to the subtrees 1.1-1.1.1 and 1-1.2 instead of applying the simpler rules I-III (which in this case is obviously the right decision: fewer instructions are emitted). The decision might be based on complex cost measurements that take instruction scheduling costs into account, but this depends on the concrete back-end generator we use, and proving the optimality of this process is not in the scope of our work.
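The core of the cover check can be sketched as follows (a simplified, hypothetical model, not the paper's implementation: trees as nested tuples, rule patterns with variables X/Y that match any subtree itself covered by some rule, and C matching a constant operand):

```python
# Hypothetical sketch of a cover checker: given a program tree and a claimed
# assignment of rules to subtrees, verify that every rule's pattern matches.
RULES = {
    "I":   ("intadd",  "X", "Y"),
    "II":  ("intmult", "X", "Y"),
    "III": ("intconst", "C"),
    "IV":  ("intmult", ("intconst", "C"), "X"),
    "V":   ("intadd",  "X", ("intconst", "C")),
}

def match(pattern, tree, roots):
    # A variable matches any subtree whose root is covered by some rule.
    if pattern in ("X", "Y"):
        return id(tree) in roots
    if pattern == "C":
        return isinstance(tree, (int, str))  # constant operand
    if not isinstance(tree, tuple) or len(pattern) != len(tree):
        return False
    return pattern[0] == tree[0] and all(
        match(p, t, roots) for p, t in zip(pattern[1:], tree[1:]))

def check_cover(tree, cover):
    # cover: list of (rule_name, subtree) pairs claimed by the unverified
    # code selector; the root must be covered and every pattern must match.
    roots = {id(t) for _, t in cover}
    if id(tree) not in roots:
        return False
    return all(match(RULES[r], t, roots) for r, t in cover)

# (A*B)+C as in Figure 8: intadd(intmult(intconst(A), intconst(B)), intconst(C))
a, b, c = ("intconst", "A"), ("intconst", "B"), ("intconst", "C")
mult = ("intmult", a, b)
expr = ("intadd", mult, c)

# The optimizing cover from the example: rules V, IV, and III for node 1.1.2.
cover = [("V", expr), ("IV", mult), ("III", b)]
print(check_cover(expr, cover))  # True
```

The real checker must additionally validate the register allocation and the evaluation schedule; the tuple encoding and variable conventions here are invented for the sketch.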

5.1 Measurements

We applied our approach to the back-end described in the previous section. Our implementation language for the outer parts is Sather-K, a type-safe object-oriented language [Goo97]. The code generator generator BEG produces a C implementation of 18,000 lines of code; the generator tool itself is written in 35,000 lines of Modula-2 code. Table 1 compares the lines of code and program sizes of our example. If we wanted to verify the implementation traditionally, we would have to verify either the generated C code or the generator implementation⁵. Applying the checker approach, we can use the generated C code unverified, but have to verify the additional outer parts such as the interface (reader), the checker and the transformation code. If we compare the number of lines to verify at the implementation level as an indicator of the expense of verification, we obtain a factor of 15 between the generated C code and the implementation of the additional outer parts.

                                           Lines                                        Source   Binary
  Generator BEG (Modula-2)                 35,000                                       2 MB     1.1 MB
  Generated C code, MIS code selection     18,000                                       400 KB   500 KB
  To be verified, outer parts (Sather-K)   500 (Reader) + 300 (Checker) + 400 (Transform)  200 KB
  Industrial: ANDF => ST9                  140,000                                      6.4 MB   3.5 MB

  Table 1. Lines of program code to verify for the example back-end

This shows the feasibility of the program-checking approach, even if we consider that different programs can differ greatly in how difficult they are to verify. The last line of Table 1 shows the size of an industrial code selector generated with BEG (ANDF => ST9). The checker and transformation part for this application would also be much bigger, but the ratio between unverified and verified code would be even larger.

⁴ This expression could of course be constant-folded; we use this example for simplicity.
⁵ We would even have to use a verified C compiler, which is usually not available.

Binary Prog. Byte

110.000

2.4 MB

1.2 MB

22.000

600 KB

300 KB

500 (Parser) +100 (Compare) +700 (AST)

14 KB 3 KB 30 KB

200 KB

Table2. Lines of program code to verify for a program-checked IS front-end We also applied our approach to a compiler front-end for a C subset IS generated with the compiler toolbox COCKTAIL, Table 2 shows the even better results we achieved there [DGG+ 95,HH98].

6 Conclusions

We addressed the problem of the verification of large-scale software systems with program-checking and presented a concrete verification framework. Our approach emphasizes the software engineering aspect, because it bridges the gap between the verification of complex software systems and their practical implementation, especially with generators. The main idea is to assure correctness of the implementation by introducing runtime program-checkers that check the results of arbitrary pieces of software. The approach is applicable to problems where partial correctness is sufficient.

We applied our approach to the construction of verified compilers. The focus of this paper lies on the construction of verified back-ends partly generated with tools. The proposed compiler construction framework allows the implementation of verified optimizing back-ends down to a correct machine code implementation. The result of such a "checked" back-end is native machine code. We want to stress again that this checking is completely independent of the interleaving of the different optimizations for code selection, register allocation and scheduling.

The Verifix project is a large-scale case study in program verification with the major goal of verifying not only the specification and high-level implementation of compilers, but also of guaranteeing the correctness of their final binary executables on hardware. State-of-the-art compiler construction uses complex and highly sophisticated algorithms in order to produce efficient code. Assuring correctness by checking their results enables us to use these algorithms in our verified compiler implementation and even to generate them with available, unverified compiler generators. This significantly reduces the size of the code which has to be verified.

Acknowledgements. This work is supported by the Deutsche Forschungsgemeinschaft project Verifix (Construction of Correct Compilers). We are grateful to our colleagues in Verifix.

References

[AG85] Alfred V. Aho and Mahadevan Ganapathi. Efficient tree pattern matching: an aid to code generation. In Brian K. Reid, editor, Conference Record of the 12th Annual ACM Symposium on Principles of Programming Languages, page 334, New Orleans, LA, January 1985. ACM Press.
[BK95] Manuel Blum and Sampath Kannan. Designing programs that check their work. Journal of the ACM, 42(1):269-291, January 1995.
[BM81] R. S. Boyer and J S. Moore. The Correctness Problem in Computer Science. Academic Press, London, England, 1981.
[DGG+95] A. Dold, T. Gaul, W. Goerigk, G. Goos, A. Heberle, F. von Henke, U. Hoffmann, H. Langmaack, H. Pfeifer, H. Ruess, and W. Zimmermann. Definition of the Language IS. Verifix Working Paper [Verifix/UKA/1], University of Karlsruhe/Kiel/Ulm, 1995.
[ESL89] H. Emmelmann, F.-W. Schröer, and R. Landwehr. BEG - a generator for efficient back ends. In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation, June 1989.
[Gau95] T. S. Gaul. An Abstract State Machine Specification of the DEC-Alpha Processor Family. Verifix Working Paper [Verifix/UKA/4], University of Karlsruhe, 1995.
[GG75] J. B. Goodenough and S. L. Gerhart. Toward a Theory of Test Data Selection. SIGPLAN Notices, 10(6):493-510, June 1975.
[GGZ98] W. Goerigk, T. S. Gaul, and W. Zimmermann. Correct Programs without Proof? On Checker-Based Program Verification. In Proceedings ATOOLS'98 Workshop on "Tool Support for System Specification, Development, and Verification", Advances in Computing Science, Malente, 1998. Springer-Verlag.
[GHHZ96] T. S. Gaul, A. Heberle, D. Heuzeroth, and W. Zimmermann. An ASM Specification of the Operational Semantics of MIS. Verifix Working Paper [Verifix/UKA/3 revised], University of Karlsruhe, 1996.
[Goo97] Gerhard Goos. Sather-K - The Language. Software - Concepts and Tools, 18:91-109, 1997.
[Gur95] Y. Gurevich. Evolving Algebras: Lipari Guide. In E. Börger, editor, Specification and Validation Methods. Oxford University Press, 1995.
[GZG+98] W. Goerigk, W. Zimmermann, T. Gaul, A. Heberle, and U. Hoffmann. Praktikable Konstruktion korrekter Übersetzer [Practical construction of correct compilers]. In Softwaretechnik '98, volume 18 of Softwaretechnik-Trends, pages 26-33. GI, 1998.
[HG98] A. Heberle and T. Gaul. Syntax einer Sprache zur textuellen Repräsentation von Graphen [Syntax of a language for the textual representation of graphs]. Internal report, 1998.
[HH98] Andreas Heberle and Dirk Heuzeroth. The Formal Specification of IS. Technical Report [Verifix/UKA/2 revised], IPD, Universität Karlsruhe, January 1998.
[NK96] Albert Nymeyer and Joost-Pieter Katoen. Code Generation Based on Formal BURS Theory and Heuristic Search. Technical Report INF 95-42, University of Twente, 1996.
[NL98] G. C. Necula and P. Lee. The design and implementation of a certifying compiler. In Proceedings of the 1998 ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI), pages 333-344, 1998.
[Pro95] Todd A. Proebsting. BURS automata generation. ACM Transactions on Programming Languages and Systems, 17(3):461-486, May 1995.
[PSS98] A. Pnueli, O. Shtrichman, and M. Siegel. Translation validation for synchronous languages. Lecture Notes in Computer Science, 1443:235-??, 1998.
[WB97] Hal Wasserman and Manuel Blum. Software reliability via run-time result-checking. Journal of the ACM, 44(6):826-849, November 1997.
[ZG97] W. Zimmermann and T. Gaul. On the Construction of Correct Compiler Back-Ends: An ASM Approach. Journal of Universal Computer Science, 3(5):504-567, 1997.
