The Formal Verification of Hard Real-Time Systems

Rachel Mary Cardell-Oliver
Queens' College

A dissertation submitted for the degree of Doctor of Philosophy in the University of Cambridge

January 1992
Abstract
This dissertation investigates the use of formal verification to demonstrate the correctness of hard real-time systems, that is, computer systems in which programs are required to respond to events from their environments within real-time deadlines. A mathematical formalism, higher order logic, is used to prove that programs react in a correct and timely manner to identified real-time events of their environments. Higher order logic is used both to describe the behaviour of programs written in a simple, imperative programming language with asynchronous communication primitives and to specify the environments with which programs interact. It is assumed that the implementation of the programming language allows the exact time taken to execute commands to be calculated. It is then proved that a specification of programs and environments satisfies requirements which are also stated in higher order logic. The HOL system, a theorem prover for higher order logic, has been used to type-check specifications and mechanize verification proofs.

The main contributions of this dissertation to formal verification are techniques for writing generic specifications and verification methods for hard real-time programs. A generic specification is one which describes the behaviour of a class of hard real-time systems and which can be reused as the basis of different system implementations. Our generic specifications are modular and hierarchical, enabling the separation of behaviour common to all implementations from behaviour which varies between different implementations. A compositional semantics for the real-time programming language is proposed and two strategies for verifying hard real-time programs are examined. The verification of the class of sliding window protocols provides a nontrivial worked example to illustrate this method. Sliding window protocols transfer data from one processor to another in an environment which provides only an unreliable communication channel between the processors.
Other examples presented in the dissertation include an input-output device which interacts with its environment using a handshaking protocol. All the examples have been specified in higher order logic and mechanically verified in the HOL theorem prover.
Declaration
This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration. No part of this dissertation has already been or is concurrently being submitted for any degree, diploma or other qualification at any other university.
Acknowledgements
I would like to express my thanks and gratitude to my supervisor Dr Mike Gordon. He has taught me a great deal about doing research and his supervision has given me just the right mixture of guidance and a free rein. I would also like to thank Dr Ian Chessell and Dr Brian Billard of the Australian Defence Science and Technology Organization (DSTO) for awarding me a scholarship to study at Cambridge and for the workstation and office which were used for my research.

I am grateful to all in the Hardware Verification Group of the Computer Laboratory of the University of Cambridge. In particular I would like to thank Roger Hale, Juanito Camilleri, Monica Nesi, Jim Grundy and Neil Viljoen for commenting on drafts of this dissertation. Many thanks also to Tom Melham, Mike Gordon and all who worked on the HOL system and its manual. Without their work on HOL the mechanical proofs of this dissertation would have been much, much harder. Jeff Joyce, John Herbert and participants in the UBC HOL Course held in Vancouver in 1990 and 1991 proved stimulating audiences for presentations of early versions of this work. Professor Roger Needham of the Computer Laboratory at Cambridge helped with various academic and bureaucratic problems. The Cambridge Research Centre of SRI International provided an office, technical support and coffee and biscuits.

John Cardell-Oliver has been encouraging, patient and supportive. He maintained my sense of perspective. He also spurred me on at the end and read drafts of this dissertation.

I am grateful for financial support from the Australian Defence Science and Technology Organization, the Australian Committee of the Cambridge Commonwealth Trust and an Overseas Research Studentship. I would also like to thank DSTO, and the Defence staff at Australia House, the Computer Laboratory of the University of Cambridge, the Organizers of the 1991 HOL Meeting in Davis and Queens' College for assistance with attendance at conferences.
I would like to thank my examiners, Professor Mathai Joseph and Professor Roger Needham, for their insightful observations and comments.
Contents

1 Introduction 6
  1.1 Problem Definition 6
    1.1.1 Hard Real-Time Systems 6
    1.1.2 Specification and Verification 7
    1.1.3 Focus 8
  1.2 Related Work 9
    1.2.1 Program Verification 9
    1.2.2 Specification Languages for Hard Real-Time 11
  1.3 Summary 15
  1.4 Chapter Outlines 16

2 Higher Order Logic 17
  2.1 The HOL Logic 17
    2.1.1 Types 18
    2.1.2 Terms 19
    2.1.3 Typed Terms 21
    2.1.4 Axioms, Inference Rules and Theorems 22
    2.1.5 Theories 23
    2.1.6 Extending Theories: Definitional Mechanisms 23
    2.1.7 Derived Theorems and Inference Rules 25
  2.2 The HOL Theorem Proving System 25
    2.2.1 HOL, ML and Security 26
    2.2.2 HOL Theories 26
    2.2.3 Writing Proofs in HOL 26
    2.2.4 Automating Proofs 27
    2.2.5 Libraries 28
    2.2.6 A Proof Example 28

3 Methodology 32
  3.1 Characteristics of Hard Real-Time Systems 32
  3.2 Formal Specifications 33
    3.2.1 Histories 33
    3.2.2 Processes 35
    3.2.3 Concurrent Processes 37
    3.2.4 Verification 38
  3.3 Specifying When Things Happen 42
  3.4 Verification Strategy 42
    3.4.1 Hierarchical Specification 42
    3.4.2 Modular Specification 44
    3.4.3 Automating Verification 44
  3.5 What Has Been Captured? 44
    3.5.1 Specification Caveats 45
    3.5.2 Verification Caveats 45
    3.5.3 Why Verify? 46
  3.6 Summary of the Methodology 46

4 Verifying Programs 48
  4.1 Assumptions 48
  4.2 Syntax 49
  4.3 Semantics 51
    4.3.1 Semantic Domains 51
    4.3.2 Expression Semantics 55
    4.3.3 Command Semantics 56
  4.4 Mechanization in HOL 60
  4.5 Verification Examples 62
    4.5.1 Safety: Preserving Correctness 64
    4.5.2 Timeliness: Real-Time Response 67
  4.6 General While Semantics 71
  4.7 Discussion 73

5 Generic Protocol Specifications 75
  5.1 Sliding Window Protocols 75
    5.1.1 Positive Acknowledgement 76
    5.1.2 The Alternating Bit Protocol 76
    5.1.3 Sliding Windows 77
  5.2 Formal SWP Requirements 78
  5.3 Generic SWP Behaviour 80
    5.3.1 Physical and Logical Structure 80
    5.3.2 What Information is Shared? 81
    5.3.3 Data Messages 82
    5.3.4 Acknowledgements 84
    5.3.5 Communication Environment 85
    5.3.6 Verifying Liveness and Timeliness 86
    5.3.7 Further Assumptions: Sequence Numbers 88
    5.3.8 Full SWP Specification 88
  5.4 Verification 89
    5.4.1 Safety: Part 1 90
    5.4.2 Safety: Part 2 94
    5.4.3 Timeliness 95
    5.4.4 Total Correctness 97

6 Protocol Algorithms 100
  6.1 Algorithm Specification Strategy 100
    6.1.1 Hierarchical Specifications 101
    6.1.2 Modular Specifications 102
  6.2 Algorithm Specification Examples 103
    6.2.1 A Family of Channels 103
    6.2.2 Protocol Buffers 105
    6.2.3 Timeouts 109
    6.2.4 Window Sizes 110
  6.3 A More Generic SWP Specification 110
    6.3.1 Formal Specification of GGEN 111
    6.3.2 Verifying the GGEN REQ Theorem 113
    6.3.3 Verifying the Alternating Bit Algorithm 113
    6.3.4 Verifying Other SWP Algorithms 115
  6.4 Discussion 115

7 Protocol Implementations 117
  7.1 Verifying Safety Properties 118
    7.1.1 Specifying Safety Properties 118
    7.1.2 Safety Rules for Program Verification 119
    7.1.3 Safety Verification Examples 123
  7.2 Verifying Timeliness Properties 126
    7.2.1 Specifying Timeliness Properties 126
    7.2.2 Timeliness Rules for Program Verification 126
    7.2.3 Timeliness Verification Example 127
  7.3 Discussion 129

8 Conclusion 130
  8.1 Programs 130
  8.2 On the Size of Proofs 132
    8.2.1 Specification Style 132
    8.2.2 Proof Strategies 133
    8.2.3 Synthesis 133
  8.3 Executing Verified Programs 133
  8.4 On the role of Rigorous Verification 134

A Proofs 141
  A.1 Verification of the AB ALG GGEN theorem 141
    A.1.1 Proof of the AB ALG INV lemma 141
    A.1.2 Proof of the INV CI lemma 148
  A.2 Proof of the SEQ RULE Theorem 150
Chapter 1

Introduction

This dissertation investigates the use of formal verification to demonstrate the correctness of hard real-time systems. A mathematical formalism is used to demonstrate that programs react in a correct and timely manner to identified real-time events of their environments.
1.1 Problem Definition

1.1.1 Hard Real-Time Systems

A reactive computer program is one which is driven by interaction with its environment. For example, a computerized heating controller is reactive if it regularly monitors temperature and adjusts its output of heat. (The system is not reactive if it simply turns its heating element on and off for fixed periods without reference to the ambient temperature.) Systems consisting of one or more reactive programs which must meet deadlines are called hard real-time systems. An example of a deadline which a reactive heater might be required to meet is to turn its heating element on within 60 seconds of the temperature falling below some minimum. It is the requirement to meet deadlines which distinguishes hard real-time systems from other time-dependent systems.

The behaviour of the elements of the environment with which a hard real-time program interacts is outside its control. A heating controller cannot prevent temperature falling on a cold day or rising on a hot day; it can only react to these changes. On the other hand, the reaction of the heating controller affects future temperatures, and so there is an ongoing interaction as each part, program and environment, affects the other [KK88].

Experience suggests that hard real-time systems are difficult to implement correctly since the behaviour of one or more programs, all reacting to unpredictable events and meeting real-time deadlines, is complex, and a great deal of
attention to detail is required in their design and development [RvH89]. Furthermore, there are many hard real-time systems whose failure to react as expected could have catastrophic effects. For example, the software used in flight controllers, automatic braking systems, and automated factory control is safety critical [MoD91]. Given the current state of the art, the effort required to produce acceptably reliable hard real-time systems is usually only financially justified for those systems whose potential failure is unacceptable on grounds of safety or economic repercussions. (One of the motivations for the research reported in this dissertation was the author's experience implementing communication protocols for a distributed file system.)

1.1.2 Specification and Verification

Establishing the correctness of hard real-time systems by formal verification involves modelling and formal proof. In the case of the heater example three modelling steps are necessary:

1. describe how the temperature is affected by the heater element;
2. state how the heater element behaves;
3. formally specify the fast reaction of the system to falling temperature.

In general, establishing the correctness of hard real-time systems involves both specification and verification. Specification is the description, in some mathematical language, of what the system is intended to do, and how this is to be achieved. Verification is the process of proving that the system does, in fact, achieve its intended task.

The relationship between the correctness of hard real-time systems and formal specification and verification is illustrated in Figure 1.1. A system and its required behaviour will initially be described in ordinary prose and perhaps program code. The vertical arrows of Figure 1.1 represent specification steps. A specification is a mathematical description which serves to model a physical system. The lower horizontal arrow represents a verification proof. The detailed examination of a system required in order to specify and verify it forces a verifier to convince himself that the formal activity of specification and verification does, in fact, capture the behaviour of the physical system in which he is interested and that the system does achieve its intended task.

In order to specify a hard real-time system formally a verifier must decide upon a mathematical language to represent what the system must do and how this is achieved. A formal specification language may be an established language of mathematics such as set theory, higher order logic or a temporal logic. In this case the proof theory of that language can be used for verification. For example, if the programs and environment of a hard real-time system are defined by the logical predicate A and the property we wish to verify is specified by B, then we can verify that A satisfies B by proving the theorem that A logically implies B. Alternatively, a special purpose specification language such as CSP [Hoa85] may be defined with its own proof rules for verification.
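The satisfaction-as-implication idea can be sketched for the heater example. The predicate names below (SYS, PROG, ENV, REQ) are illustrative only, not the dissertation's actual definitions, which appear in later chapters:

```latex
% Hypothetical sketch: programs and environment are conjoined into a single
% system predicate, and verification is the proof of one implication in
% higher order logic.
\begin{align*}
\mathit{SYS} \;&=\; \mathit{PROG} \wedge \mathit{ENV}
  && \text{programs and environment (the predicate $A$)}\\
\mathit{REQ} \;&=\; \text{``element on within 60\,s of } \mathit{temp} < \mathit{min}\text{''}
  && \text{property to verify (the predicate $B$)}\\
&\vdash \mathit{SYS} \supset \mathit{REQ}
  && \text{verification theorem ($A$ logically implies $B$)}
\end{align*}
```

One consequence of this formulation is that verification needs no special-purpose proof rules: the ordinary inference rules of the logic suffice.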
[Figure 1.1: Relationship between specification, verification and "real world" systems. The diagram shows hard real-time systems (programs + environments) and the properties to verify; specification steps map these to a formal system model and formal requirements respectively, a verification proof relates the formal system model to the formal requirements, and together these yield conviction of correctness.]

A formal specification is created by "translating" an ordinary prose description of a system into the formal language chosen for specification. There are generally no fixed rules for this translation: it is a matter of interpretation and experience. However, in the case of hard real-time programs a formal semantics can be used to translate from program code to statements in the formal specification language. Given specifications of the programs and environments of a system and the properties required of it, a verification proof is constructed using the proof rules of the chosen specification language.

In summary, the steps necessary to formally verify a hard real-time system are:

1. Determine a specification language and its proof rules;
2. Define a program semantics: this gives the meaning of executing a program;
3. "Translate" a description of a system and the properties to be verified into the specification language chosen in step 1; use the semantics of step 2 to "translate" any programs;
4. Verify that the system satisfies its required properties;
5. Verification usually involves backtracking over steps 3 and 4.

The final verification proof provides a formal argument as evidence for the correctness of the physical system under consideration.
1.1.3 Focus
Having described the hard real-time systems in which we are interested and the general methods of formal specification and verification, we now turn to the focus of this dissertation within this area.
We have been primarily concerned with issues of verification. The correctness of hard real-time systems is established by strictly formal proof. Practically, this condition means that proofs must be written with machine assistance since, for the problems we have considered, the size of verification proofs, and hence the number of formal proof steps involved, is too large to be performed manually.

The type of properties we wish to verify are hard real-time properties. For example: that a particular program will always respond to some stimulus from its environment after at least X seconds but no more than Y seconds; that if the time elapsed between successive inputs is always greater than M then a program will output an exact copy of its input stream; that a program does not livelock (livelock means that a system is caught in a loop in which no progress can be made; a related property, deadlock, does not arise in our model because programs are not permitted to wait indefinitely for their environments except using a loop command); that a system is always either waiting for input or producing output.

The class of programs which are verified are distinguished by being written in a simple, imperative programming language designed for verification. Programs communicate asynchronously with their environments and, through their environments, with other programs. Our specification language and program semantics make explicit the mechanics of communication. In order to verify that programs meet hard real-time deadlines the exact times taken to execute their commands must be known. For this reason our language has no commands which could cause a program to wait indefinitely. Also, in this dissertation we have not considered execution environments with limited resources which are shared by process scheduling. Although execution times could be calculated for such environments, this could only be done at the expense of considerably complicating the semantics of program execution and therefore verification proofs.

Methods for managing rigorous verification proofs were investigated since the size of "real world" systems typically makes their verification long and complex and their specifications difficult to read and to understand. The effectiveness of our techniques for verification management was tested on a range of examples, from an input-output device to a class of communication protocols.

1.2 Related Work

Our goal is the verification of hard real-time systems containing programs. Techniques for the formal verification of hard real-time programs are described in Section 1.2.1. In Section 1.2.2 we discuss existing formal specification languages for hard real-time systems.
1.2.1 Program Verification

The programming language we have used for implementing hard real-time programs is based on the SAFE language designed as part of the SAFEMOS project. SAFEMOS is a joint project between INMOS, SRI Cambridge, and the Universities of Cambridge and Oxford which aims to demonstrate the possibility of totally verified systems: verified programs, compiled to machine code by a verified compiler and executed on a verified processor. Formal semantics for SAFE are given by Hale [Hal90a, Hal90b] and Gordon [Gor91]. Hale's semantics [Hal90b] has been used to verify that a compiler correctly realises the semantics but has not been used to verify SAFE programs. The language used by Gordon [Gor91] has a simpler input and output model than our language but is in other respects similar. Its semantics is given in terms of the machine language for a simple processor. A program logic to be used for the verification of programs is currently being developed [Gor91].

Kearney et al have proposed a method for the verification of real-time procedural code [KSA91]. Kearney's model, like ours, uses functions of time to model the behaviour of hard real-time systems. Real-time systems in Kearney's model consist of multiple tightly coupled processes communicating through a shared address space. In our model processes are loosely coupled and communicate through their environments using asynchronous input and output operations. Processes are specified in Kearney's model using a low-level assembler language, whereas the language we propose in Chapter 4, although in the same spirit, has high-level commands for loops and branching. Kearney et al report that the design of mechanical support for verification is in progress [KSA91].

The semantics proposed by Hale [Hal90b], Gordon [Gor91] and Kearney et al [KSA91] and the semantics used in this dissertation share the assumption that programs are executed on a processor with known timing behaviour. This approach to modelling time is called fine-grained time. In particular, the mechanics of communication are made explicit in our model. Similar approaches to communication (only) are considered by Koymans et al [KVdR83] and Schlichting et al [SS84], who give semantics for asynchronous message passing. Upper and lower bounds for the timing behaviour of other communication primitives such as semaphores are considered by Shaw [Sha89b].

A different approach, called coarse-grained time, has been used by many authors for specifying real-time programs. For example, lower and upper bound estimates of computation time are proposed for programming languages in which concurrency [Haa81] and synchronous communication [Hoo87, Ost89, Heh89] are built into the language. The idea of a coarse-grained time model is to abstract from the complexities of execution environments. In this case a program's verification proof may be valid for a range of different scheduling policies and communication primitives. Synchronous communication is used because it has proved useful for reasoning about concurrent programs [Hoa85]. However, the
cost of this abstraction is that it is difficult to calculate tight execution bounds for programs. Our approach differs in that, because we focus on the problem of verifying tight execution bounds, we have made our model for program execution as simple as possible, making explicit all choices about the time taken to execute programs.

Boyer, Green and Moore report the use of the Boyer-Moore theorem prover to verify a real-time program for keeping a vehicle on a straight-line course in a variable crosswind [BGM82]. This is the first example of which we are aware using a mechanical theorem prover to verify programs that interact with environments. As in our model, functions of time are used to represent the vehicle's speed, the wind speed and so on. However, in Boyer et al the time grain in which change is measured is that of program loop traversals: in each time step the program calculates a new speed parameter for the vehicle. In our model we consider a much finer time grain related to hardware clocks. Our use of fine-grained time enables the detection of properties such as race conditions and inputs which are missed when a program responds too slowly. Such errors may go undetected if the time grain of a model is too coarse.

There is a class of programming languages which are described as real-time languages in that they may be used as simulators for hard real-time systems [PDNJ87, BC84, Mos86]. These languages, although useful for the development of hard real-time systems because they can be used for prototyping, are outside the scope of this dissertation since they simulate the times at which events could happen rather than actually reacting to events in real time.
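The fine-grained timed-observation style can be made concrete with a small executable sketch, in Python rather than higher order logic. All names and numbers here (temp, heater_on, a 60-tick deadline) are illustrative assumptions, not the dissertation's formal model: histories are functions from clock ticks to observed values, and a bounded-response requirement is checked at every tick of a finite horizon, so a response that arrives too late is detected rather than missed.

```python
# Sketch: histories as functions from clock ticks to observed values.
# Checks a bounded-response (timeliness) requirement in the style of
#   forall t. temp(t) < MIN_TEMP ==> exists t' in [t, t+DEADLINE]. heater_on(t')
# over a finite horizon. All names and constants are illustrative.

MIN_TEMP = 15   # threshold below which the heater must react
DEADLINE = 60   # clock ticks allowed for the response

def meets_deadline(temp, heater_on, horizon):
    """Return True iff every cold tick is answered within the deadline."""
    for t in range(horizon):
        if temp(t) < MIN_TEMP:
            # The heater must be on at some tick within the deadline window.
            if not any(heater_on(u) for u in range(t, t + DEADLINE + 1)):
                return False
    return True

# Temperature history: a cold snap begins at tick 100.
temp = lambda t: 10 if t >= 100 else 20

# A heater reacting at tick 130 meets the 60-tick deadline;
# one reacting only at tick 200 misses it.
prompt_heater = lambda t: t >= 130
slow_heater = lambda t: t >= 200

print(meets_deadline(temp, prompt_heater, 120))  # True
print(meets_deadline(temp, slow_heater, 120))    # False
```

A coarser grain, say one observation per control-loop traversal, could hide the second failure entirely, which is the point of the comparison above.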
1.2.2 Specification Languages for Hard Real-Time

This section considers existing formal languages which could be used for specifying and verifying hard real-time systems. We also consider the suitability of these languages for the large specifications and verification proofs of nontrivial systems.
Higher Order Logic

Higher order logic has been widely used for the specification and verification of time-dependent hardware [HD86, Gor86, Her86, Joy89, Mel89]. We noted that the idea of specifying a system by timed observations, which is used in that work, is also suitable for describing hard real-time software systems. However, this method has not previously been used to describe the semantics of time-dependent programs. (Although higher order logic is used for the semantics of Hale [Hal90a, Hal90b] and Gordon [Gor91], these semantics do not use the timed observation model which is under consideration here.) In higher order logic the heater response requirement could be specified by
∀t. (temp t < min) ⊃ ∃t'. (t ≤ t' ∧ t' ≤ t + 60) ∧ on t'