Modeling Dataflow Programming in PowerEpsilon

Ming-Yuan Zhu
CoreTek Systems, Inc.
1107B CEC Building
6 South Zhongguancun Street
Beijing 100086
People's Republic of China
E-Mail: [email protected]
Contents

Part I  An Introduction to PowerEpsilon

1 Introduction
  1.1 What is a Proof?
  1.2 What is a Correct and Perfect Proof?
  1.3 Mathematics and Mathematical Logic
    1.3.1 Mathematical Theorem Proving as Computer Programming
  1.4 Formal Reasoning of Parallel and Real-Time Systems
  1.5 Long Live Mathematics
  1.6 About This Book

2 Type Theory and PowerEpsilon
  2.1 Type Theory
    2.1.1 Types in Computer Science
    2.1.2 Intuitionistic Logic
      2.1.2.1 Nonconstructive Mathematics
      2.1.2.2 Constructive Mathematics
      2.1.2.3 Deductions as Computations and Computations as Deductions
      2.1.2.4 Mechanization of Higher-Order Logic
    2.1.3 Martin-Löf's Type Theory
    2.1.4 Natural Deduction Systems
      2.1.4.1 Deduction System
      2.1.4.2 Natural Deduction System
    2.1.5 Should Types be Modeled by Calculi or Algebra?
    2.1.6 Philosophical View of Types
      2.1.6.1 Realist View
      2.1.6.2 Intuitionist View
    2.1.7 Polymorphic Functional Languages
      2.1.7.1 Edinburgh ML
      2.1.7.2 HOPE
      2.1.7.3 SASL, KRC, Miranda
  2.2 What is PowerEpsilon
  2.3 Syntax of PowerEpsilon
    2.3.1 Notations
    2.3.2 Lexicon
    2.3.3 Syntax
    2.3.4 Currying and Higher-Order Terms
    2.3.5 Abstract Syntax
  2.4 Type Inference Rules of PowerEpsilon
    2.4.1 Contexts
    2.4.2 Judgements
    2.4.3 Inference Rules
    2.4.4 Derivations
    2.4.5 Γ-Terms, Γ-Types and Γ-Propositions
    2.4.6 Type Conversion Rules
    2.4.7 Cumulativity Relation
    2.4.8 Curry-Howard Isomorphism
      2.4.8.1 From Types to Programs: The Constructive Mathematics
      2.4.8.2 From Programs to Types: The Type Inferences
    2.4.9 Applications of PowerEpsilon
  2.5 Formal Logic in PowerEpsilon
    2.5.1 Logical Operators Defined by PowerEpsilon
      2.5.1.1 Logical Implication
      2.5.1.2 Free and Bound Variables
      2.5.1.3 Universal Quantified Formula
      2.5.1.4 Existential Quantified Formula
    2.5.2 Logical Operators Derived from PowerEpsilon
      2.5.2.1 Logical Negation
      2.5.2.2 Logical Conjunction
      2.5.2.3 Logical Disjunction
      2.5.2.4 Equivalence Relation
    2.5.3 Proof by Contradiction
    2.5.4 Informal and Formal Theorem Proving
  2.6 General Schema of Investigating Algebraic Structures
  2.7 Functions
    2.7.1 Injection, Surjection and Bijection
    2.7.2 Binary Operations
  2.8 Sets and Operations

Part II  Modeling Dataflow in PowerEpsilon
3 Synchronous Dataflow Model
  3.1 Reactive Systems
    3.1.1 Transformational and Interactive Systems Versus Reactive Systems
    3.1.2 Features of Reactive Systems
  3.2 Synchronous Model
    3.2.1 Synchronous versus Asynchronous Systems
    3.2.2 Why Synchronous Modeling
    3.2.3 Synchronous Hypothesis
    3.2.4 Synchronous Programming
      3.2.4.1 Event-Driven Scheme
      3.2.4.2 Sampling Scheme
      3.2.4.3 State Transition Automata
    3.2.5 Synchronous Languages
  3.3 Dataflow Model
    3.3.1 Dataflow Approach
    3.3.2 Control Flow versus Data Flow
      3.3.2.1 Control Flow Languages
      3.3.2.2 Dataflow Languages
  3.4 Synchronous Dataflow Model
  3.5 Continuously Operating Programs
  3.6 The Problems with the Imperative Approach
    3.6.1 Complexity of Imperative Languages
    3.6.2 Lack of Precise Specifications
    3.6.3 Unreliability
    3.6.4 Inefficiency
  3.7 PowerEpsilon as a Dataflow Language
    3.7.1 Infinitary Programming
      3.7.1.1 Defining Dataflow as a Strong Sum
    3.7.2 Lazy Evaluation
    3.7.3 Formal Reasoning of Real-Time Programs

4 Modeling of Dataflow Programming in PowerEpsilon
  4.1 Data Structure Stream
  4.2 Operators on Streams
    4.2.1 Constructors of Streams
    4.2.2 Testing Functions
    4.2.3 Selection Functions
  4.3 Other Operators on Streams
    4.3.1 Interleaving Operator
    4.3.2 Hiding Operator
    4.3.3 Parallel Operator
      4.3.3.1 Strong Parallel Operator
      4.3.3.2 Weak Parallel Operator
      4.3.3.3 Strong Parallel Operator for Different Streams
      4.3.3.4 Weak Parallel Operator for Different Streams
      4.3.3.5 Or Parallel Operator
    4.3.4 Split Operator
      4.3.4.1 Strong Split Operator
      4.3.4.2 Weak Split Operator
      4.3.4.3 Fairness Split Operator
    4.3.5 Followed by Operator
    4.3.6 WHENEVER Operator
    4.3.7 UPON Operator
    4.3.8 ASA Operator
    4.3.9 Arrow Operator
    4.3.10 Pipe Operator
    4.3.11 Conditional Operator

5 Modeling of Synchronous Dataflow Programming in PowerEpsilon
  5.1 Flows and Clocks
  5.2 Operators on TStream
  5.3 Data Flow Operators
    5.3.1 Interleaving Operator
    5.3.2 Hiding Operator
    5.3.3 Parallel Operator
      5.3.3.1 Strong Parallel Operator
      5.3.3.2 Weak Parallel Operator
      5.3.3.3 Strong Parallel Operator for Different Streams
      5.3.3.4 Weak Parallel Operator for Different Streams
    5.3.4 Or Parallel Operator
    5.3.5 Split Operator
      5.3.5.1 Strong Split Operator
      5.3.5.2 Weak Split Operator
      5.3.5.3 Fairness Split Operator
    5.3.6 Pipe Operator
    5.3.7 Conditional Operator
  5.4 Temporal Operators
    5.4.1 PRE Operator
    5.4.2 FBY Operator
    5.4.3 Initialization Operator
    5.4.4 Current Operator
    5.4.5 When Operator
    5.4.6 Principles for Timed Streams
6 Programming in Dataflow Languages
  6.1 Hamming's Problem
  6.2 Prime Problem
  6.3 Boolean Streams
    6.3.1 Intersection Operator
    6.3.2 Union Operator
    6.3.3 Negation Operator
    6.3.4 Implication Operator
    6.3.5 Conditional Operator
  6.4 Dining Philosopher Problem
    6.4.1 Problem Description
    6.4.2 A Dataflow Solution in PowerEpsilon
    6.4.3 An Imperative Solution in C
    6.4.4 A Comparison of Two Solutions
  6.5 A Calculator in Dataflow Model
  6.6 Dataflow Sorting
    6.6.1 Bubble Sort
      6.6.1.1 Bubble Sort, Version 1
      6.6.1.2 Bubble Sort, Version 2
      6.6.1.3 Bubble Sort, Version 3
    6.6.2 Merge Sort
    6.6.3 Quick Sort
  6.7 Real-Time Programming
    6.7.1 Watchdogs
    6.7.2 Kilometer Meter
      6.7.2.1 Dataflow Model
      6.7.2.2 Calculating Cycle Time

Part III  Semantics of Dataflow Languages
7 Synchronous Dataflow Language SCADE/LUSTRE
  7.1 Synchronous Model
    7.1.1 Zero-Time Computation Assumption
    7.1.2 The Synchronism Hypothesis
  7.2 Dataflow Model
  7.3 Synchronous Dataflow Model
  7.4 SCADE/LUSTRE Language
    7.4.1 LUSTRE Program Structure
    7.4.2 Some Programming Examples
      7.4.2.1 Kilometer Meter
      7.4.2.2 Linear Systems
      7.4.2.3 Non-Linear and Time-Varying Systems
      7.4.2.4 Logical Systems
      7.4.2.5 Mixed Logical and Signal Processing Systems
8 Abstract Syntax of SCADE/LUSTRE
  8.1 Lexicon of SCADE/LUSTRE
    8.1.1 Identifiers
    8.1.2 Identifier List
    8.1.3 Binary Operators
      8.1.3.1 Type Definition of Binop
      8.1.3.2 Constructors
      8.1.3.3 Predicate Functions
    8.1.4 Unary Operators
      8.1.4.1 Type Definition of Unop
      8.1.4.2 Constructors
      8.1.4.3 Predicate Functions
  8.2 Syntax of Expressions
    8.2.1 Expressions
      8.2.1.1 Type Definition of Expression
      8.2.1.2 Constructors of Expression
      8.2.1.3 Predicate Functions of Expression
      8.2.1.4 Projectors of Expression
    8.2.2 Expression List
      8.2.2.1 Type Definition of Expr list
      8.2.2.2 Constructor of Expr list
      8.2.2.3 Projectors of Expr list
  8.3 Syntax of Equations
    8.3.1 Equations
      8.3.1.1 Type Definition of Equation
      8.3.1.2 Constructor of Equation
      8.3.1.3 Projectors of Equation
    8.3.2 Equation List
      8.3.2.1 Type Definition of Eq list
      8.3.2.2 Constructors of Eq list
      8.3.2.3 Predicate Functions of Eq list
      8.3.2.4 Projectors of Eq list
  8.4 Syntax of Nodes
    8.4.1 Types
      8.4.1.1 Type Definition of Types
      8.4.1.2 Constructors of Types
    8.4.2 Input Parameters
      8.4.2.1 Type Definition of Input par
      8.4.2.2 Constructors of Input Par
      8.4.2.3 Predicate Function of Input Par
      8.4.2.4 Projectors of Input Par
    8.4.3 Input Parameter List
      8.4.3.1 Type Definition of Input par list
      8.4.3.2 Constructors of Input par list
      8.4.3.3 Predicate Functions of Input par list
      8.4.3.4 Projectors of Input par list
    8.4.4 Output Parameters
      8.4.4.1 Type Definition of Output par
      8.4.4.2 Constructors of Output Par
      8.4.4.3 Predicate Function of Output Par
      8.4.4.4 Projectors of Output Par
    8.4.5 Output Parameter List
      8.4.5.1 Type Definition of Output
      8.4.5.2 Constructors of Output list
      8.4.5.3 Predicate Function of Output list
      8.4.5.4 Projectors of Output list
    8.4.6 Header of Nodes
      8.4.6.1 Type Definition of Header Node
      8.4.6.2 Constructors of Header Node
      8.4.6.3 Projectors of Header Node
    8.4.7 Local Declaration
      8.4.7.1 Type Definition of Local
      8.4.7.2 Constructors of Local
      8.4.7.3 Predicate Function of Local
      8.4.7.4 Projectors of Local
    8.4.8 Local Declaration List
      8.4.8.1 Constructors of Local
      8.4.8.2 Predicate Function of Local
      8.4.8.3 Projectors of Local
    8.4.9 Node Definition
      8.4.9.1 Type Definition of Node Def
      8.4.9.2 Constructors of Node Def
      8.4.9.3 Projectors of Node Def
    8.4.10 Node Definition List

9 Semantics of SCADE/LUSTRE
  9.1 Semantic Objects and Functions
    9.1.1 Values
      9.1.1.1 Type Definition of SDVal
      9.1.1.2 Constructors of SDVal
      9.1.1.3 Projectors of SDVal
    9.1.2 Value List
      9.1.2.1 Type Definition of SDVal list
      9.1.2.2 Constructors of SDVal list
      9.1.2.3 Projectors of SDVal list
      9.1.2.4 Predicate Function of SDVal list
  9.2 Environments
    9.2.1 Denotable Value
      9.2.1.1 Type Definition of SDDval
      9.2.1.2 Constructors of SDDval
      9.2.1.3 Predicate Function of SDDval
    9.2.2 Environment
      9.2.2.1 Type Definition of Env
      9.2.2.2 Constructors of Env
      9.2.2.3 Predicate Functions of Env
  9.3 Semantics of Expressions
    9.3.1 Semantics of Unary Operator
      9.3.1.1 Semantics of Logical Negation
      9.3.1.2 Semantics of Arithmetic Negation
      9.3.1.3 Semantics of Unary Operators
    9.3.2 Semantics of Binary Operator
      9.3.2.1 Semantics of +
      9.3.2.2 Semantics of -
      9.3.2.3 Semantics of *
      9.3.2.4 Semantics of /
      9.3.2.5 Semantics of mod
      9.3.2.6 Semantics of =
      9.3.2.7 Semantics of <>
      9.3.2.8 Semantics of <
      9.3.2.9 Semantics of <=
      9.3.2.10 Semantics of >
      9.3.2.11 Semantics of >=
      9.3.2.12 Semantics of and
      9.3.2.13 Semantics of or
      9.3.2.14 Semantics of Binary Operators
    9.3.3 Semantics of PRE Operator
    9.3.4 Semantics of INIT Operator
    9.3.5 Semantics of WHEN Operator
    9.3.6 Semantics of CURRENT Operator
    9.3.7 Semantics of IF Operator
    9.3.8 Semantics of Boolean Expressions
    9.3.9 Semantics of Integer Expressions
    9.3.10 Semantics of Project Operator
    9.3.11 Semantics of Expressions
  9.4 Semantics of Equations
    9.4.1 The Utility Functions
    9.4.2 Semantic Function of Equation
    9.4.3 Semantic Function of Eq list
  9.5 Semantics of Nodes
    9.5.1 Structured Programming in LUSTRE
    9.5.2 Parameter Elaboration Functions
    9.5.3 Semantics of Local Declarations
    9.5.4 Semantics of Output Declarations
    9.5.5 Semantics of Nodes

Part IV  Implementation of SCADE/LUSTRE
10 An Imperative Programming Language ToyC
  10.1 From SCADE/LUSTRE to Implementation - Program Transformation Approach
  10.2 Syntax of ToyC
    10.2.1 Declaration
      10.2.1.1 Type Definition of Type Elem
      10.2.1.2 Constructors of Type Elem
      10.2.1.3 Predicate Functions of Type Elem
      10.2.1.4 Type Definition of Declaration
      10.2.1.5 Constructors of Declaration
      10.2.1.6 Predicate Functions of Declaration
      10.2.1.7 Projectors of Declaration
      10.2.1.8 Induction Rule of Declaration
    10.2.2 Binary Operators
      10.2.2.1 Type Definition of Binop
      10.2.2.2 Constructors of Binop
      10.2.2.3 Predicate Functions of Binop
    10.2.3 Unary Operators
      10.2.3.1 Type Definition of Unop
      10.2.3.2 Constructors of Unop
      10.2.3.3 Predicate Functions of Unop
    10.2.4 Expression
      10.2.4.1 Type Definition of TExpression
      10.2.4.2 Constructors of TExpression
      10.2.4.3 Predicate Functions of TExpression
      10.2.4.4 Projectors of TExpression
      10.2.4.5 Induction Rule of TExpression
    10.2.5 Statement
      10.2.5.1 Type Definition of Statement
      10.2.5.2 Constructors of Statement
      10.2.5.3 Predicate Functions of Statement
      10.2.5.4 Projectors of Statement
      10.2.5.5 Induction Rule of Statement
    10.2.6 Program
      10.2.6.1 Type Definition of Program
      10.2.6.2 Constructors of Program
      10.2.6.3 Projectors of Program
      10.2.6.4 Induction Rule of Program
  10.3 Static Semantics of ToyC
    10.3.1 Type Information
      10.3.1.1 Definition of Type Info
      10.3.1.2 Constructors of Type Info
      10.3.1.3 Predicate Functions of Type Info
      10.3.1.4 Projectors of Type Info
    10.3.2 Type Environment
      10.3.2.1 Definition of TEnv
      10.3.2.2 Constructors of TEnv
      10.3.2.3 Predicate Functions of TEnv
    10.3.3 Static Semantics for Declaration
    10.3.4 Static Semantics for Expression
    10.3.5 Static Semantics for Statement
    10.3.6 Static Semantics for Jump
    10.3.7 Static Semantics for Program
  10.4 Utilities for Dynamic Semantics
    10.4.1 Environment
      10.4.1.1 Type Definition of Env
      10.4.1.2 Constructors of Env
      10.4.1.3 Predicate Functions of Env
    10.4.2 State
      10.4.2.1 Type Definition of Value
      10.4.2.2 Constructors of Value
      10.4.2.3 Predicate Functions of Value
      10.4.2.4 Projectors of Value
      10.4.2.5 Type Definition of Memory
      10.4.2.6 Constructors of Memory
      10.4.2.7 Projectors of Memory
      10.4.2.8 Type Definition of Input
      10.4.2.9 Constructors of Input
      10.4.2.10 Projectors of Input
      10.4.2.11 Type Definition of Output
      10.4.2.12 Constructors of Output
      10.4.2.13 Type Definition of State
      10.4.2.14 Constructors of State
      10.4.2.15 Projectors of State
      10.4.2.16 Continuation
  10.5 Dynamic Semantics of ToyC
    10.5.1 Operations for Binop and Unop
    10.5.2 Semantics of Declaration
    10.5.3 Semantics for TExpression
    10.5.4 Semantics for Statement
10.5.5 Semantics for Jump . . . . . . . . . . . . . . . . . . . . . 236 10.5.6 Semantics for Toy Program . . . . . . . . . . . . . . . . . 237 11 The SCADE/LUSTRE Compiler - Static Analysis 11.1 Static Verification . . . . . . . . . . . . . . . . . . . . . . . . . 11.2 Definition Checking . . . . . . . . . . . . . . . . . . . . . . . . 11.2.1 Checking of Variable Definition in Equations . . . . . . 11.2.1.1 Checking of Variable Defined in Identifier List 11.2.1.2 Checking of Variable Defined in Equation . . . 11.2.1.3 Checking of Variable Defined in Equation List 11.2.2 Checking of Local Variable Declaration . . . . . . . . . 11.2.2.1 Checking of Local Declaration . . . . . . . . . 11.2.2.2 Checking of Local Declaration List . . . . . . . 11.2.3 Checking of Output Declaration . . . . . . . . . . . . . 11.2.3.1 Checking of Output Declaration . . . . . . . . 11.2.3.2 Checking of Output Declaration List . . . . . . 11.2.4 Checking of Node Definition . . . . . . . . . . . . . . . . 11.2.4.1 Checking of Node Definition . . . . . . . . . . 11.2.4.2 Checking of Node Definition List . . . . . . . . 11.3 Initialization Checking . . . . . . . . . . . . . . . . . . . . . . . 11.3.1 Initialization Checking of Value Stream . . . . . . . . . 11.3.2 Initialization Checking of Value Stream List . . . . . . . 11.3.3 Initialization Checking of Expression . . . . . . . . . . . 11.4 Clock Calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.4.1 The Why of Clock Calculus . . . . . . . . . . . . . . . . 11.4.2 Semantic Equality . . . . . . . . . . . . . . . . . . . . . 11.4.2.1 Clock Checking of Two Value Streams . . . . . 11.4.2.2 Clock Checking of Value Stream List . . . . . 11.4.2.3 Clock Checking of Two Value Stream Lists . . 11.4.2.4 Basic Clock Checking of Two Value Streams . 11.4.2.5 Non Basic Clock Checking of Value Streams . 11.4.2.6 Clock Checking of Expression List . . . . . . . 11.4.2.7 Clock Checking of If-Expression . . . . . . . . 
11.4.2.8 Clock Checking of If-Expression List . . . . . . 11.4.2.9 Clock Checking of Expression . . . . . . . . . . 11.5 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.5.1 Substitution of Expression . . . . . . . . . . . . . . . . . 11.5.2 Substitution of Expression List . . . . . . . . . . . . . . 11.5.3 Substitution of Equation . . . . . . . . . . . . . . . . . . 11.5.4 Substitution of Equation List . . . . . . . . . . . . . . . 11.5.5 Substitution of Node Definition . . . . . . . . . . . . . . 11.6 Renaming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11.6.1 Renaming of Local Declaration . . . . . . . . . . . . . . 11.6.1.1 Renaming of Local Declaration . . . . . . . . . 11
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
238 238 239 239 239 239 240 240 240 241 241 241 241 242 242 242 243 243 243 244 244 244 245 245 245 246 246 247 247 248 248 249 251 251 253 253 254 254 254 254 254
11.6.1.2 Renaming of Local Declaration List . . . 255
11.6.2 Translation from Local Declaration to Expression . . . 255
11.6.2.1 Translation from Local Declaration to Expression . . . 255
11.6.2.2 Translation from Local Declaration List to Expression List . . . 255
11.6.3 Renaming of Node Definition . . . 256
11.7 Directed Acyclic Variable Graph and Topological Sorting . . . 256
11.7.1 Directed Acyclic Variable Graph . . . 256
11.7.1.1 Definition of PIdent . . . 257
11.7.1.2 Checking of Occurrence of Identifier in Equations . . . 257
11.7.1.3 Finding All of PIdent . . . 258
11.7.1.4 Creation of Graph of PIdent . . . 260
11.7.2 Topological Sorting . . . 261
11.8 Checking of Recursive Node Call . . . 263
11.8.1 Name Occurrence Checking . . . 263
11.8.1.1 Name Occurrence Checking in Expression . . . 263
11.8.1.2 Name Occurrence Checking in Expression List . . . 264
11.8.1.3 Name Occurrence Checking in Equation . . . 265
11.8.1.4 Name Occurrence Checking in Equation List . . . 265
11.8.2 Construction of Node Call Graph . . . 265
11.8.2.1 Checking of Node Name Called by Node Definition . . . 265
11.8.2.2 Checking of Node Name Called by Node Definition List . . . 266
11.8.2.3 Construction of Node Call Graph from a Node Definition . . . 266
11.8.2.4 Construction of Node Call Graph from a Node Definition List . . . 266
11.8.2.5 Construction of Node Call Graph . . . 267
11.9 Node Expansion . . . 267
11.9.1 The Why of Node Expansion . . . 267
11.9.2 Finding Node in Node Definition . . . 268
11.9.3 Expansion of Expression . . . 268
11.9.4 Expansion of Expression List . . . 277
11.9.5 Expansion of Equation . . . 278
11.9.6 Expansion of Equation List . . . 279
11.9.7 Expansion of Node . . . 280
11.10 Flattening Node . . . 280
11.10.1 Flattening Expression . . . 281
11.10.1.1 Flattening Binary Expression . . . 281
11.10.1.2 Flattening Unary Expression . . . 282
11.10.1.3 Flattening Init-Expression . . . 282
11.10.1.4 Flattening Pre-Expression . . . 283
11.10.1.5 Flattening Current-Expression . . . 283
11.10.1.6 Flattening When-Expression . . . 283
11.10.1.7 Flattening If-Expression . . . 284
11.10.1.8 Flattening Expression . . . 285
11.10.2 Flattening Equation . . . 286
11.10.3 Flattening Equation List . . . 287
11.10.4 Flattening Node Definition . . . 287
12 The SCADE/LUSTRE Compiler - Single-Loop Code . . . 288
12.1 The Completeness, Soundness and Correctness of Translation . . . 288
12.2 The Basic Approach . . . 290
12.3 Translation of Equations . . . 292
12.3.1 Generation of Expressions . . . 292
12.3.1.1 Translation of Variables . . . 292
12.3.1.2 Translation of Expressions . . . 293
12.3.2 Generation of Statements . . . 298
12.4 Translation of Nodes . . . 302
12.4.1 Computation Order and Single Loop . . . 302
12.4.2 Generation of Declarations . . . 303
12.4.2.1 Generation of Input Declaration . . . 303
12.4.2.2 Generation of Output Declaration . . . 303
12.4.2.3 Generation of Local Declaration . . . 304
12.4.2.4 Generation of Declaration for Auxiliary Variables . . . 304
12.4.3 Generation of Programs . . . 309
12.5 Remarks . . . 309
13 The SCADE/LUSTRE Compiler - Automaton-Like Code . . . 311
13.1 The Basic Approach . . . 311
13.2 Static State . . . 314
13.2.1 Type Definition of DeltaState . . . 314
13.2.2 Constructors of DeltaState . . . 314
13.2.3 Other Utilities of DeltaState . . . 314
13.2.4 Predicate Functions of DeltaState . . . 316
13.3 State and State Code in Automata . . . 317
13.3.1 Definition of State in Automata . . . 317
13.3.2 Generation of Automata from Static State . . . 317
13.3.3 Erasing Redundant State in Automata . . . 319
13.3.3.1 Erasing Assignment of init with NIL . . . 321
13.3.3.2 Erasing Assignment of init with F after T . . . 322
13.3.3.3 Erasing Assignment of pre with non-NIL Value after NIL . . . 323
13.4 Construction of Automata . . . 324
13.4.1 Value for Partial Evaluation . . . 324
13.4.1.1 Type Definition of PEVal . . . 325
13.4.1.2 Constructors of PEVal . . . 325
13.4.1.3 Predicate Functions of PEVal . . . 325
13.4.1.4 Projectors of PEVal . . . 325
13.4.2 Partial Evaluation . . . 325
13.4.2.1 Partial Evaluation of Operations . . . 325
13.4.2.2 Partial Evaluation for TExpression . . . 326
13.4.2.3 Partial Evaluation for Statement . . . 328
13.4.3 Generation of Automata from Initial State and Code . . . 330
13.4.4 Node Reachability Checking . . . 335
13.5 Generation of Automata-Like Code . . . 337
13.5.1 Creation of DeltaState . . . 337
13.5.2 From Automata to ToyC Statements . . . 337
13.5.3 Generation of ToyC Program . . . 338

V Epilogue . . . 339
14 Related Works . . . 340
14.1 The Synchronous Approach . . . 340
14.1.1 Fundamentals of Synchrony . . . 340
14.1.2 Synchrony and Concurrency . . . 341
14.2 Synchronous Programming Language ESTEREL . . . 341
14.3 Dataflow Language - Lucid . . . 346
14.4 Signal . . . 346
14.5 Highlights of the Last Fifteen Years . . . 348
14.5.1 Getting Tools to Market . . . 349
14.5.2 The Cooperation with Airbus and Schneider Electric . . . 349
14.5.3 The Cooperation with Dassault Aviation . . . 350
14.5.4 The Cooperation with Snecma . . . 350
14.5.5 The Cooperation with Texas Instruments . . . 351
14.6 Some New Technology . . . 351
14.6.1 Handling Arrays . . . 351
14.6.2 New Techniques for Compiling ESTEREL . . . 352
14.6.3 Observers for Verification and Testing . . . 355
14.7 Major Lessons . . . 356
14.7.1 The Strength of the Mathematical Model . . . 356
14.7.2 Compilation Has Been Surprisingly Difficult but Was Worth the Effort . . . 357
14.8 Future Challenges . . . 358
14.8.1 Architecture Modeling . . . 358
14.8.2 Beyond Synchrony . . . 361
14.8.3 Real-Time and Logical Time . . . 361
14.8.4 From Programs to Components and Systems . . . 362
14.8.4.1 What are the Proper Notions of Abstraction, Interface and Implementation, for Synchronous Components? . . . 363
14.8.4.2 Multi-Faceted Notations à la UML, and Related Work . . . 366
14.8.4.3 Dynamic Instantiation, Some Hints . . . 366

15 Conclusions . . . 368
15.1 A Language for Concurrent and Real-Time Computing . . . 368
15.2 Internet Programming . . . 368
15.3 Synchronous Dataflow Language Design Issues . . . 369
15.3.1 LUCID Synchrone . . . 369
15.3.2 New Features for Synchronous Dataflow Languages . . . 369
15.4 Synchronous Dataflow Language Implementation Issues . . . 370
15.4.1 Compiling Technique for Distributed Architectures . . . 370
15.4.2 Intra-Instance Scheduling . . . 370
15.4.3 Non-Synchronous Systems . . . 370
List of Figures

2.1 Type Inference Rules for PowerEpsilon . . . 25
2.2 More on Type Inference Rules for PowerEpsilon . . . 26
2.3 Type Conversion Rules for PowerEpsilon . . . 28
List of Tables

2.1 Interpretation of Logical Connectives . . .
6.1 Interlace of Dataflow . . . 101
7.1 Nodes and Clocks . . . 108
8.1 Boolean Flows and Clocks . . . 113
9.1 Sampling and Interpolating . . . 165
13.1 Variable Assignment 1 . . . 320
13.2 Variable Assignment 2 . . . 320
13.3 Variable Assignment 3 . . . 320
13.4 Variable Assignment 4 . . . 321
13.5 Structure of Automata . . . 337
14.1 Some Basic ESTEREL Statements . . . 344
14.2 Signal Operators and Their Meaning . . . 348
Part I
An Introduction to PowerEpsilon
Chapter 1
Introduction

The purpose of this book is to give an introduction to dataflow programming in terms of PowerEpsilon, a computer language and mathematical theorem-proof development system. The study of these systems encompasses a major portion of dataflow programming. Thus, in a sense, our subject matter is old; however, the formal development we have adopted here is comparatively new. A beginner may find our account at times uncomfortably abstract, since we have tied ourselves to a computer system. In order to read this book, you first have to become familiar with PowerEpsilon, a language with many features in common with LISP. You will find, however, that PowerEpsilon is very easy to learn.

Ever since the birth of the first modern computer, computer technology has developed at a fantastic speed. Today we see computers being used not only to solve computationally difficult problems, such as carrying out a fast Fourier transform or inverting a matrix of high dimensionality, but also being asked to perform tasks that would be called intelligent if done by human beings. Some such tasks are writing programs, answering questions, and proving mathematical theorems.
1.1 What is a Proof?
The Oxford Advanced Learner's Dictionary of Current English gives the following senses of the term proof:

1. evidence (in general), or a particular piece of evidence, that is sufficient to show, or help to show, that something is a fact.

2. demonstrating, testing of whether something is true, a fact, etc.

3. test, trial, examination.

...

Webster's New Collegiate Dictionary gives the following senses of the term proof:
1. That degree of cogency, arising from evidence, which convinces the mind of any truth or fact and produces belief; also, that which proves or tends to prove. Properly speaking, proof is the effect or result of evidence; evidence is the medium of proof.

2. a. Any effort, process, or operation designed to establish or discover a fact or truth; test; trial. b. A test applied to substances to determine if they are of satisfactory quality, etc.

3. Quality or state of having been proved or tried; as armor of proof.

4. Proof strength, that is, the minimum strength of proof spirit; sometimes, short for PROOF SPIRIT. Also, strength with reference to the standard for proof spirit.

5. Engraving and Etching. A proof impression.

6. Law. Evidence operating to determine the judgment of a tribunal.

7. Math. An operation for testing the accuracy of a previous operation; a check.

8. Photo. A test print made from a negative.

9. Print. A trial impression, as from type, taken for correcting or examination; also called proof sheet.

Conjecture and proof are the twin pillars of mathematics. Conjecture stands at the cutting edge, in the wild region separating the known from the unknown. Proof stands at the center of mathematics, a solid tower about which an elaborate scaffolding of mathematical theorems is built. The task of mathematicians is to develop conjectures – guesses or hypotheses about mathematical behavior – which they can then attempt to prove. On this basis, mathematics grows, stretching into new fields and revealing hitherto unseen connections between well-established domains.

Like other sciences, mathematics has an experimental component. In the trial-and-error process of developing and proving conjectures, mathematicians collect data and look for patterns and trends. They construct novel forms, seek logical arguments to strengthen a case, and search for counterexamples to destroy an argument or expose an error.
In these and other ways, mathematicians act as experimentalists, constantly testing their ideas and methods. The concept of proof, however, brings something to mathematics that is missing from other sciences. Once the experimental work is done, mathematicians have ways to build a logical argument that pins the label "true" or "false" on practically any conjecture. Physicists can get away with overwhelming evidence to support a theory; in mathematics, a single counterexample is enough to sink a beautiful conjecture. For this reason, too, no set of pretty pictures, collected during countless computer experiments, is complete enough to substitute for a mathematical proof.

The word "proof" is used frequently by mathematicians in their daily work.

• A proof is evidence that shows something is true under certain assumptions.
• In terms of PowerEpsilon, a proof is a program whose type corresponds to the theorem to be proved.
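This propositions-as-types reading can be sketched outside PowerEpsilon as well. The following Python fragment is a hypothetical illustration (the names `and_comm` and `modus_ponens` are ours, not part of PowerEpsilon): a theorem is written as a type, and any program inhabiting that type counts as a proof of it.

```python
from typing import Callable, Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Theorem: A and B implies B and A.
# Under the propositions-as-types reading, the type
# Tuple[A, B] -> Tuple[B, A] is the theorem, and this
# function, which inhabits that type, is its proof.
def and_comm(p: Tuple[A, B]) -> Tuple[B, A]:
    a, b = p
    return (b, a)

# Theorem: from A and (A implies B) we may conclude B.
# A proof of an implication is a function from proofs to proofs.
def modus_ponens(a: A, f: Callable[[A], B]) -> B:
    return f(a)

assert and_comm((1, "x")) == ("x", 1)
assert modus_ponens(3, lambda n: n + 1) == 4
```

In PowerEpsilon the type system is rich enough that the type really is the full statement of the theorem; the Python types above only approximate this idea.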
1.2 What is a Correct and Perfect Proof?
A perfect proof is one that is stated precisely and completely, with no arguments missing.
1.3 Mathematics and Mathematical Logic
A theory of deduction utilizes various ideas of logic that may appear strange, even foreign, to mathematics students with little background in logic. The main concern of this book is to develop the important theory of deduction known as the predicate calculus. In an effort to overcome the strangeness of the logical ideas and methods involved, we shall first present the theory of deduction based on the connectives "not" and "or". This theory, known as the propositional calculus, characterizes the conclusions, or consequences, of a given set of assumptions, and so provides us with the formal side of arguments. The question of the validity of a given argument of this sort is easy to solve by the truth-table method, and so is really trivial. Therefore, in studying the accompanying theory of deduction we are able to concentrate on the formal apparatus and methods of a theory of deduction, without the complications owing to the subject matter under investigation. In short, the propositional calculus is a convenient device for making clear the nature of a theory of deduction.

Mathematical logic is the study of logic as a mathematical theory. Following the usual procedure of applied mathematics, we construct a mathematical model of the system to be studied, and then conduct what is essentially a pure mathematical investigation of the properties of our model. In rough outline, the steps in setting up a theory of deduction are as follows:

1. First, the propositions (statements) of a language are characterized. This is achieved by actually creating a specific formal language possessing its own alphabet and rules of grammar; in fact, the sentences of the formal language are effectively spelled out by suitably chosen rules of grammar.

2. Finally, the notion of truth within this specialized and highly artificial language is characterized in terms of the concept of a "proof".
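The truth-table method mentioned above mechanizes validity checking for the propositional calculus: a formula is valid exactly when it evaluates to true under every assignment of truth values to its variables. A small sketch in Python (the helper names `is_valid` and `impl` are ours):

```python
from itertools import product

def is_valid(formula, num_vars):
    """Truth-table method: a propositional formula is valid
    (a tautology) iff it is true under every assignment."""
    return all(formula(*vals)
               for vals in product([False, True], repeat=num_vars))

# Implication is definable from "not" and "or": p => q is (not p) or q.
def impl(p, q):
    return (not p) or q

# "A or not A" (excluded middle) is valid classically ...
assert is_valid(lambda a: a or (not a), 1)
# ... while "A implies B" alone is not: it fails when A is true, B false.
assert not is_valid(impl, 2)
```

Since an n-variable formula has only 2^n assignments, the method always terminates, which is exactly why validity in the propositional calculus is "really trivial" compared with the predicate calculus.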
1.3.1 Mathematical Theorem Proving as Computer Programming
All the characteristics of computer programming, such as modularity and reusability, apply to mathematical theorem proving. A proof is more efficient if it can
run faster than others. More technical terminology will be used in mathematical theorem proving, such as debugging a mathematical proof or running a mathematical proof. A proof can be executed distributively and concurrently.

Finding Good Representations of Problems

Choosing a good representation is a key step towards a successful solution of a problem in computer programming. It is the same in mathematics: a good representation of a mathematical structure means that the problem is half-solved. For example, a real number can be represented as a Dedekind cut or as a Cauchy sequence, and both have their advantages and disadvantages. The Dedekind cut construction of the real numbers has an initial advantage of simplicity, in that it provides a simple definition of the real numbers and their ordering. But multiplication of Dedekind cuts is awkward, and verification of the properties of multiplication is a tedious business. The Cauchy sequence construction of the real numbers has the advantage of generality, since it can be used with an arbitrary metric space in place of the rational numbers. Therefore, the two definitions are often used for different purposes.

Information Hiding Principle

In computer programming practice, we often try to separate a program's specification from its implementation. This isolates the implementation details from the input/output behaviour of the program. On the other hand, with its abstract nature, a mathematical theorem (specification) and its proof (implementation) are always separated from each other. Therefore, a good computer program always captures some elegant mathematical properties.

Stepwise Refinement

"Structured programming" is the formulation of programs as hierarchical, nested structures of statements and objects of computation. Structured programming is considered by many programmers as a synonym for modular or top-down programming.
Structured programming can be visualized as the application of basic problem-decomposition methods to establish a manageable, hierarchical problem structure. These decomposition methods are common to all engineering disciplines as well as the physical, chemical, and biological sciences: break the problem's solution down into progressively more detailed levels (decomposition). In computer programming we often break the problem being solved down into several small subproblems, so that the solutions to the subproblems can be composed into the solution of the original problem. It is similar in mathematical theorem-proving practice: mathematicians often break the proof of a theorem into several small lemmas, and the theorem is proved once these lemmas are proved.
1.4 Formal Reasoning of Parallel and Real-Time Systems
Formal reasoning about the correctness of a software system is difficult. Formal reasoning about the correctness of a parallel or real-time software system is even more difficult.

• First, we need a formal language which is capable of modeling parallel and real-time systems.

• Second, we need a framework in which formal reasoning about parallel and real-time systems becomes possible.

PowerEpsilon is a language and a framework satisfying both criteria.

Currently most real-time applications are developed in a very traditional way. The developers start from hardware design while, at the same time, writing device drivers. Integration and testing are done using an in-circuit emulator (ICE). Then the coding of application programs starts, line by line, in C/C++ or even assembly language; the most frequently used tools are compilers and debuggers. Sometimes the ICE must be used again, since the developers cannot be sure whether an error was caused by a hardware design fault or just a software bug. As a result, developers spend most of their time debugging a wrong program instead of writing new programs.
1.5 Long Live Mathematics
Computer programming and mathematical theorem proving are equally difficult, or equally easy. If writing a program is the same as proving a mathematical theorem, then designing a digital system is the same as constructing a mathematical theory. They are isomorphic to each other.
1.6 About This Book
This book presents a practical work of formally developing the synchronous dataflow model in the framework of constructive type theory, supported by the mathematical theorem-proof development system PowerEpsilon. It actually contains nothing new, but rather a re-interpretation of the dataflow computing model using PowerEpsilon. It is intended to give a close look at the relationship between mathematics and computer science. It is rather technical, and looks more like a computer programming textbook than a mathematical one.

You should not, as a first step, try to read this book straight through. No computer programmer likes to read a program unless he is going to write a similar
program. You do not need to read the proofs given in this book, which are written entirely in PowerEpsilon, unless you need to write a similar proof or are going to modify them. Instead, you should execute the proofs in the PowerEpsilon environment. Do not be too upset when you find it quite difficult to write down a proof in PowerEpsilon; think about how difficult it was when you first wrote a C or Pascal program. You will become familiar with the language, the system, and the mathematical theorem-proving style very soon.

Programming is fun because it is challenging to make the best design decisions. One especially neat feature of programming is that it is very clear when you do it well. There is a big universe out there; a hundred years from now, people will not know what the programs we have written today were. The mathematics, however, will still be there.

I believe very strongly in using what I have written. Note that I am the best user of my own programs: since I use them constantly, I am aware of their shortcomings and can change them if I find something stupid. A program you have created is useful only if it is useful to yourself first.

We do not intend to cover all aspects of dataflow programming. Instead, we provide an outline of the development of the dataflow model in PowerEpsilon. I did not try to put all of the theorems and proofs into the book, since this would destroy the whole structure of the book; instead, we list only those we consider most important.

This book is divided into five parts:

• Part I – An Introduction to PowerEpsilon contains an overview of the basic features of PowerEpsilon. It is organized as follows:

Chapter 1. Introduction provides a brief introduction to the PowerEpsilon system.

Chapter 2. Typed λ-Calculus describes the fundamental concepts of typed λ-calculus and the lexicon, syntax and semantics of PowerEpsilon.
• Part II – Modeling Dataflow in PowerEpsilon specifies a general model of dataflow in terms of PowerEpsilon. It is organized as follows:

Chapter 3. Synchronous Dataflow Model discusses the concepts and frameworks of dataflow models.

Chapter 4. Modeling of Dataflow Programming in PowerEpsilon provides a general framework of dataflow models in terms of PowerEpsilon.

Chapter 5. Modeling of Synchronous Dataflow Programming in PowerEpsilon provides a general framework of synchronous dataflow models in terms of PowerEpsilon.
Chapter 6. Programming in Dataflow Languages illustrates the dataflow programming style with a number of examples.

• Part III – Semantics of Dataflow Languages gives a syntactic and semantic definition of a specific dataflow programming language and system, SCADE/LUSTRE. It is organized as follows:

Chapter 7. Synchronous Dataflow Language SCADE/LUSTRE gives a brief introduction to the synchronous dataflow language and system SCADE/LUSTRE.

Chapter 8. Abstract Syntax of SCADE/LUSTRE presents the abstract syntax of SCADE/LUSTRE in terms of PowerEpsilon.

Chapter 9. Semantics of SCADE/LUSTRE presents the formal semantics of SCADE/LUSTRE in terms of PowerEpsilon.

• Part IV – Implementation of SCADE/LUSTRE describes several implementation techniques for SCADE/LUSTRE. It is organized as follows:

Chapter 10. An Imperative Programming Language ToyC contains a complete definition of an imperative programming language, ToyC, including its abstract syntax and its static and dynamic semantic specifications. The language is used in the rest of the book as a target language for the implementation of SCADE/LUSTRE.

Chapter 11. The SCADE/LUSTRE Compiler - Static Analysis presents the static well-formedness checking of SCADE/LUSTRE.

Chapter 12. The SCADE/LUSTRE Compiler - Single-Loop Code gives a simple way to translate a given LUSTRE program into semantically equivalent single-loop code in ToyC.

Chapter 13. The SCADE/LUSTRE Compiler - Automaton-Like Code presents a method to translate a given LUSTRE program into semantically equivalent automaton-like code.

• Part V – Epilogue summarizes the book by discussing some related works, followed by conclusions. It is organized as follows:

Chapter 14. Related Works investigates and compares several other synchronous languages, such as Signal and ESTEREL, which are built on a common mathematical framework that combines synchrony (i.e., time advances in lock step with one or more clocks) with deterministic concurrency.

Chapter 15.
Conclusions discusses some general synchronous dataflow language design issues and mentions some areas for future research.
Chapter 2
Type Theory and PowerEpsilon

2.1 Type Theory

2.1.1 Types in Computer Science
Webster's New Collegiate Dictionary gives the following senses of the word "type":

. . . a: qualities common to a number of individuals that distinguish them as an identifiable class: as (1): the morphological, physiological, or ecological characters by which relationship between organisms may be recognized (2): the form common to all instances of a word b: a typical and often superior specimen c: a member of an indicated class or variety of people d: a particular kind, class, or group: as (1): a taxonomic category essentially equivalent to a division or phylum (2): a group distinguishable on physiologic or serological bases (3): one of a hierarchy of mutually exclusive classes in logic suggested to avoid paradoxes e: something distinguishable as a variety. . . .

Type is one of the most important concepts in the theory of programming languages. The research progress of type theory has had a profound impact on the development of computer science during the past two decades. Wegner [33] has given a detailed discussion of the question "What is a type?", which is quoted as follows:

This is a difficult question to answer because a complete answer must characterize aggregate properties of the type system as a whole as well as properties of individual types. We were tempted to replace the question by "What is a
type system?". Instead of such rephrasing we choose to interpret the original question as an abbreviation for the longer question: "What properties must types have both individually and collectively to constitute an acceptable type system for an object-oriented language?". The following collection of partial answers illustrates the variety of levels at which an answer may be formulated.

1. Applicative programmer's view: Types partition values into equivalence classes with common attributes and operations. Polymorphism allows types to have overlapping value sets, so the partition (equivalence relation) becomes a covering (compatibility relation) of the universal value set.

2. System evolution (object-oriented) view: Types are behavior specifications (predicates) that may be composed and incrementally modified to form new behavior specifications. Inheritance is an important mechanism for incrementally modifying behavior that supports system evolution.

3. Type checking view: Types impose syntactic constraints on expressions so that operators and operands of composite expressions are compatible. A type system is a set of rules for associating a type with every semantically meaningful subexpression of a programming language.

4. Verification view: Types determine behavior invariants that instances of the type are required to satisfy.

5. System programming and security view: Types are a suit of clothes (armor) that protects raw information (bit strings) from unintended interpretations.

6. Implementer's view: Types specify a storage mapping for values.
2.1.2 Intuitionistic Logic
In mathematical logic, formal languages in which we can express substantial parts of mathematics have a long history, going back to Frege's formulation of the predicate logic in 1879. So, if we want a programming language in which it is possible not just to write down programs but also to reason about them, an obvious attempt is to see whether any of the formalizations used for mathematics could also be used for programming.

2.1.2.1 Nonconstructive Mathematics
Today, the standard formalization of classical, i.e., nonconstructive, mathematics is the axiomatization of set theory given by Zermelo in 1908 (and later extended by Fraenkel). However, ZF is not intended to be a language in which we are supposed to actually do mathematics, but rather a language in which we
exhibit the basic principles of set theory. For instance, in ZF there are only two nonlogical symbols, = and ∈, expressing equality and membership respectively. So in ZF we cannot write any program at all. Even if we extend ZF to a syntactically richer language, there remains the problem of how to represent programs; the notion of function cannot be used, because functions in classical set theory are in general not computable.

2.1.2.2 Constructive Mathematics
Out of the foundational crisis of mathematics in the first decades of this century, constructive mathematics arose as an independent branch of mathematics, mainly developed by Brouwer under the name intuitionism. Constructive mathematics did not get much support, because of the general belief that important parts of mathematics were impossible to develop constructively. By the work of Bishop, however, this belief has been shown to be totally wrong. In his book "Foundations of Constructive Analysis", Bishop constructively rebuilds central parts of classical analysis, and he does it in a way that demonstrates that constructive mathematics can be as simple and elegant as classical mathematics. Bishop has also envisaged the possibility of using a formalization of constructive mathematics for programming, starting from Gödel's theory of computable functions of finite type. Constable has also used constructive mathematics as a foundation of programming.

2.1.2.3 Deductions as Computations and Computations as Deductions
The following are some general examples of "deductions as computations":

• A first example of "deduction as computation" is backward chaining. An algorithm searching for a deduction of A from B1, ..., Bn may start with the desired conclusion A, and then systematically apply all rules of deduction in reverse at intermediate stages, to see if each statement could arise as an intermediate conclusion from intermediate premises. The algorithm terminates when A has been traced back through various stages to the assumptions B1, ..., Bn, and does not terminate if it never succeeds in constructing a deduction of A from B1, ..., Bn. A typical example of this kind of system is the logic programming language Prolog.

• A second example of "deduction as computation" is forward chaining. An algorithm searching for a deduction of A from B1, ..., Bn may start with B1, ..., Bn as data, systematically applying all rules of deduction at intermediate stages to intermediate premises to obtain intermediate conclusions. The program terminates when A is obtained as a conclusion, and therefore when a deduction of A from B1, ..., Bn has been obtained.
An idealization of the classical expert-system language OPS5 is an example of this kind of system.

The following are some general examples of "computation as deduction":

• Think of a computation as proceeding in stages, and as having a unique state at each stage and a unique input history. Each successive state is one of the possible successors of the previous state of the processor and input history. The possible finite sequences of states compatible with the machine's operation rules are the possible finite execution sequences. A sequential machine is one with only one execution sequence. A translation from execution sequences to deductions can be described as follows: given the input history H, the current state S, and one possible successor state S′ (there may be many), associate one rule of inference I(H, S, S′) with H and S as premises and S′ as conclusion. This is an operational view of computation. There are many other views of computation, and each admits a corresponding deduction system. From another point of view, each execution sequence is a model of a theory, rather than a deduction in the theory.

• Every system in which program correctness (i.e., that programs meet their specifications) can be proved can be construed as a logical system. There is an enormous quantity of ongoing work on sequential programs concerning program specification, program development, and program verification.

From the intuitionistic point of view, proposition and judgment are different concepts. A proposition is a description of a certain fact; we can use logical operators such as ⇒, ∧, ∨, ¬, ∀ and ∃ to construct complex propositions from simple ones. On the other hand, when we say that a proposition holds, we have made a judgment. For example, if A stands for "points x and y are on a line", then A is only a proposition. But if we say that A is true, that is, if we have a proof showing that points x and y are on a line, then we have made a judgment.
In classical logic, a proposition is a Boolean-valued function with value true or false. In intuitionistic logic, however, a proposition is determined by the contents of its proofs: if we can give a proof of the proposition, then the proposition holds. Provability is therefore the truth value of a proposition. The meaning of the logical connectives under the intuitionistic interpretation of propositions is given in Table 2.1. That is, the proposition False has no proof. A proof of the proposition A ∧ B consists of a proof of A and a proof of B. A proof of A ∨ B consists of a proof of A or a proof of B. A proof of A ⇒ B is a method by which, given a proof of A, a proof of B can be obtained. A proof of ∀(x)B(x) is also a method: given any object a, it produces a proof of B(a). A proof of ∃(x)B(x) consists of an object a together with a proof of B(a).
Proposition   | Proof of the Proposition
False         | no proof
A ∧ B         | a proof of A and a proof of B
A ∨ B         | a proof of A or a proof of B
A ⇒ B         | a method that, given a proof of A, produces a proof of B
∀(x)B(x)      | a method that, given any object a, produces a proof of B(a)
∃(x)B(x)      | an object a together with a proof of B(a)

Table 2.1: Interpretation of Logical Connectives
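The intuitionistic reading in Table 2.1 can be made concrete by representing proofs as ordinary data. The following Python sketch is our own illustration (the names are invented for the example), not PowerEpsilon code:

```python
# Illustrative only: proofs represented as data, following Table 2.1.
# A proof of A /\ B is a pair; a proof of A \/ B is a tagged value
# recording which disjunct is proved; a proof of A => B is a function
# turning proofs of A into proofs of B.

conj_proof = ("proof-of-A", "proof-of-B")     # A /\ B
disj_proof = ("left", "proof-of-A")           # A \/ B, proved via A

def impl_proof(proof_of_a):
    # A => B: a method that, given a proof of A, yields a proof of B
    return ("proof-of-B-from", proof_of_a)

# False has no proof, so anything claiming one is absurd:
def ex_falso(proof_of_false):
    raise AssertionError("False has no proof")
```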
2.1.2.4 Mechanization of Higher-Order Logic
Mechanization of higher-order logic is an important problem. It has been pointed out that many problems can be conveniently and compactly represented in logics of higher than first order [28].
2.1.3 Martin-Löf's Type Theory
Martin-Löf's type theory [21, 22, 23] has been developed with the aim of being a formalization of constructive mathematics. Its rules are formulated in the style of Gentzen's natural deduction system for predicate logic, a formal system which Gentzen set up in 1936 with the intention that it should be as close as possible to actual reasoning. Martin-Löf has suggested that type theory could also be viewed as a programming language. As such it is a typed functional language without assignments or other imperative features. Compared with other programming languages, it has a very rich type structure, in that the type of a program can be used to describe what the program should do without describing how the program performs its task. The language differs from other programming languages in that all programs terminate. General recursion is not available, but primitive recursion together with higher-order functions makes it possible to express all provably total recursive functions. In order to express the task of a program in the type system, it is necessary to be able to express properties of programs. Properties can be expressed as propositions (formulae), using predicate logic. In recent years, a number of new type theories have been proposed, such as the Calculus of Constructions (CC) of Coquand and Huet [9]. These theories make it possible to formally treat “higher-order” concepts such as “proposition”, “axiom”, “rule”, “transformation”, “problem”, and “proof strategy” in one uniform framework. Normally, a software development environment describes only a special kind of “proposition” or “rule”, but cannot treat them in a more abstract way. Based on these theories, a number of new approaches to software development have been proposed [8].
In this report we describe the core of a strongly-typed polymorphic functional programming language called PowerEpsilon. Our original motivation came from the problem of finding a strongly-typed language suitable for use as a metalanguage for manipulating programs, proofs, and other similar symbolic data. PowerEpsilon can be used both as a programming language with a very rich set of data structures and as a system for formalizing constructive mathematics. In type theory, the concept of a type is more basic than that of a set. We say that A is a type if we know what its objects are and whether two objects of A are equal. Moreover, the equality defined on the objects of a given type A is an equivalence relation, and this equivalence relation is decidable.
2.1.4 Natural Deduction Systems
2.1.4.1 Deduction System
A deduction system for the predicate calculus consists of a (possibly infinite) recursive set of valid well-formed formulas (wffs), called axioms, and a finite set of mappings, called rules of inference; each rule of inference maps one or more valid wffs into a valid wff. A wff B is said to be deducible (provable) in such a system if there exists a finite sequence of wffs such that B is the last wff in the sequence and each wff in the sequence is either an axiom or derived from previous wffs in the sequence by one of the rules of inference. Such a sequence of wffs is called a proof of B. Thus, since each axiom is a valid wff and each rule of inference maps valid wffs into a valid wff, it follows that every deducible wff is valid.

2.1.4.2 Natural Deduction System
A natural deduction system is a deduction system, invented by Gentzen, in which the set of axioms and inference rules is rich enough to enable valid wffs to be deduced in a very “natural” way. We use the Greek letter Γ to indicate any sequence of zero or more wffs, and the notation Γ ⊢ B to state that the wff B is a consequence of Γ.
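The notion of deducibility defined above (a sequence of wffs, each of which is an axiom, an assumption, or the result of a rule applied to earlier wffs) can be sketched as a small checker. This is our own illustration in Python, with modus ponens as the only rule of inference:

```python
# A minimal proof checker for the definition above: a sequence of wffs
# is a proof if every member is an axiom, an assumption in gamma, or
# follows from earlier members by modus ponens.  An implication A -> B
# is encoded as the tuple ("->", A, B); atomic wffs are strings.

def follows_by_mp(wff, earlier):
    # wff follows by modus ponens if some earlier A and ("->", A, wff)
    # both occur among the earlier members of the sequence
    return any(("->", a, wff) in earlier for a in earlier)

def is_proof(sequence, axioms, gamma=()):
    earlier = []
    for wff in sequence:
        if wff not in axioms and wff not in gamma and not follows_by_mp(wff, earlier):
            return False
        earlier.append(wff)
    return True

# gamma = {P} with axiom P -> Q deduces Q, i.e. gamma |- Q:
print(is_proof(["P", ("->", "P", "Q"), "Q"],
               axioms=[("->", "P", "Q")], gamma=["P"]))   # True
```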
2.1.5 Should Types be Modeled by Calculi or Algebra?
Wegner [33] has also discussed this question. His discussion can be summarized as follows:

• A calculus is a syntactic system of transformation rules. A “calculus” may be a formal system such as the predicate calculus, where computation consists of applying rules of inference to prove theorems. It may be a reduction system such as the λ-calculus, where computation consists of reducing expressions to a normal form in which no further reductions are applicable. The notion of calculus that emerges from these examples
is that of a syntactically defined set of computation or inference rules.

• An algebra is a semantic system whose behavior can be realized by a variety of syntactic calculi. For example, the algebra of integers can be realized by decimal or binary number systems, or even by a λ-calculus representation. An algebra may be thought of as an equivalence class of calculi with common behavior, or as an abstraction from a specific syntactic realization of a calculus to a semantic specification of an underlying behavior.

• Specifying a computation as an algebra or a calculus. In some contexts a choice can be made between specifying a computation as an algebra or as a calculus.

• Relation between calculi and algebras. Calculi are concrete (syntactic) algebras, while algebras are abstract (semantic) calculi. Viewed as a mathematical statement, we can simply say that an algebra is a model of a calculus in the model-theoretic sense, and conversely that a calculus is a concrete realization of some behavior it is trying to model. Moreover, a given calculus generally has many semantic models, and conversely a given algebra can generally be realized by many calculi.

• Modeling types with calculi and algebras. Algebras can be used to specify behavior, while calculi can be used to specify rules for computation. Since individual types are forms of behavior, they are appropriately specified by algebras; relationships among types, by contrast, are appropriately specified by calculi. Multi-sorted algebras provide a basis for modeling properties of individual types, while the second-order λ-calculus provides a basis for modeling properties of the type system as a whole, such as polymorphism. As the theory of power domains has been used to study the semantics of concurrent systems, the theory of super types can be used to study formal logic frameworks.
2.1.6 Philosophical View of Types
2.1.6.1 Realist View
The realist view of types is that we live in a world populated by values, and that types are an explanatory mechanism introduced for the purpose of classifying and managing values. Thus the realist postulates that a global universe of values exists prior to things. The objects of sense perception, or sometimes of cognition in general, are real in their own right and exist independently of their being
known or related. The philosophical position is that “observability depends on existence”.

2.1.6.2 Intuitionist View
The intuitionist view of types is that types are the basic conceptual entities of the domain of discourse, and that values exist only in so far as they are constructible from some type. Thus the intuitionist requires existence to be demonstrated by observability or constructibility, and believes that natural phenomena as well as artificial mathematical and computational entities exist only when they are observed, constructed, or interpreted. The philosophical position is that “existence depends on observability”. From the mathematical point of view, the following correspondence may be observed in proof theory:

Realism          ←→  Intuitionism
Existence Proof  ←→  Constructive Proof
2.1.7 Polymorphic Functional Languages
2.1.7.1 Edinburgh ML
A typical polymorphic functional programming language is perhaps Edinburgh ML, defined in the LCF (Logic for Computable Functions) system [12]. ML is a polymorphic functional language. Its main features are:

• It is fully higher-order, i.e. functions are first-class values and may be passed as arguments, returned as results, or embedded in data structures.

• It has a simple, but flexible, mechanism for raising and handling exceptions.

• ML has an extensible and completely secure polymorphic type discipline – implicit polymorphism [24]. ML programs may be written in an essentially type-free way, and their consistency with respect to typing is automatically checked.

2.1.7.2 HOPE
HOPE [2] is a polymorphically typed, higher-order, recursion-equation-based functional language. It is a successor to an earlier first-order recursion-equation-based language, NPL [1], which itself grew out of work on program transformation. The first implementation of HOPE was at Edinburgh University. There are now implementations at Bell Laboratories and at Imperial College, London, where it was the initial language behind the design of the parallel graph reduction machine ALICE [11].
2.1.7.3 SASL, KRC, Miranda
Turner [31] has been responsible for a series of higher-order functional languages, culminating in Miranda, a polymorphically typed, higher-order, recursion-equation-based functional language. The basic ideas are taken from the earlier languages SASL [29, 27] and KRC [30], with the addition of a type discipline essentially the same as that of ML. Miranda has been implemented on a variety of computers running under the Unix operating system.

• The Miranda programming language is purely functional – there are no side-effects or imperative features of any kind. A program is a collection of equations defining the various functions and data structures we are interested in computing.

• An equation can have several alternative right-hand sides distinguished by ‘guards’ (a guard is a Boolean expression).

• Miranda is a higher-order language – functions are first-class citizens and can be both passed as parameters and returned as results.

• Miranda's evaluation mechanism is ‘lazy’, in the sense that no subexpression is evaluated until its value is known to be required. One consequence of this is that it is possible to define functions which are non-strict (meaning that they are capable of returning an answer even if one of their arguments is undefined).

• ZF-expressions (also called list comprehensions) in Miranda give a concise syntax for a rather general class of iterations over lists. The notation is adapted from Zermelo-Fraenkel set theory.

• Miranda is strongly typed. That is, every expression and every subexpression has a type, which can be deduced at compile time, and any inconsistency in the type structure of a program results in a compile-time error message. Types in Miranda can be polymorphic, as in ML. Users can define their own types using equations, and these user-defined types can also be polymorphic.
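Two of the Miranda features listed above, lazy evaluation and ZF-expressions, have rough analogues in other languages. The sketch below illustrates them in Python (our choice of illustration language, not Miranda syntax), using generators for laziness and comprehensions for ZF-expressions:

```python
from itertools import count, islice

# Laziness: an "infinite" stream of natural numbers; no element is
# computed until its value is demanded.
naturals = count(0)
first_five = list(islice(naturals, 5))    # forces only 5 elements

# A ZF-expression such as  [n * n | n <- [1..10]; n mod 2 = 0]
# corresponds to a list comprehension with a guard:
even_squares = [n * n for n in range(1, 11) if n % 2 == 0]

print(first_five)     # [0, 1, 2, 3, 4]
print(even_squares)   # [4, 16, 36, 64, 100]
```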
2.2 What is PowerEpsilon
The languages used by mathematicians for describing mathematics are rarely formal. It is common for different people to use different notations to denote the same thing, or the same notation to denote different things. Mathematical books are written in rather ambiguous natural languages and use differing terminology. Students moving from one school to another are often shocked to find that classes are taught in a completely different way from what they are used to. What we need is a formal language
for describing mathematical notations, concepts and proofs, so that students can be educated and mathematicians can communicate with each other in a uniform and rigorous way. PowerEpsilon [38, 39, 40, 41, 36, 37] was introduced as a system for formalizing constructive mathematics. The system may be viewed as the λ-calculus associated with natural deduction proofs in an extension of Church's higher-order logic [7]. PowerEpsilon is a strongly-typed polymorphic functional programming language based on Martin-Löf's type theory [23] and the Calculus of Constructions (CC) [9, 10]. The system can be used both as a programming language with a very rich set of data structures and as a metalanguage for formalizing constructive mathematics. The system has been implemented using the software development system AUTOSTAR constructed by the author [35]. PowerEpsilon is a proof checker much like other mechanical proof checkers, such as LCF [12] and Nuprl [8], which are completely formal user-controlled systems. However, PowerEpsilon is more powerful than LCF and Nuprl, in that the equality and induction rules for arbitrary inductive types are definable.
2.3 Syntax of PowerEpsilon
2.3.1 Notations
For technical reasons, the following notations are used in PowerEpsilon:

• ! : the universal quantifier ∀.
• ? : the existential quantifier ∃.
• \ : the λ-abstraction symbol λ.
2.3.2 Lexicon
Because of the limited computer character set (ASCII codes only), identifiers may use a variety of letters, both lowercase (a, b, ...) and uppercase (A, B, ...), and even digits (0, 1, 2, ...); however, the subscript and superscript letters and Greek letters usually used in mathematics are not allowed. Comments are enclosed between % and %.
2.3.3 Syntax
A program written in PowerEpsilon is a theory with an optional query part:

program ::= theory {query}
theory  ::= theory identifier is {import} term decl list end;
query   ::= query import term list end;

A theory is a syntactic unit in PowerEpsilon which has no semantic meaning at all; its only role is to syntactically wrap a set of closely related terms (propositions and proofs) into one package. The purpose of introducing ‘theory’ is to make it easy to manage the proof environment of PowerEpsilon. For instance, Nat is a theory that wraps up all the propositions and functions specifying properties of natural numbers. The query part describes the expressions to be evaluated. Terms in a theory or query may depend on other theories. This dependency is specified by an import clause.

import ::= import ident list
ident list ::= identifier {, identifier}*

The main part of a theory is a sequence of term declarations. A term declaration is either a term specification or a term definition. A term specification specifies the type of a term. The specification of a term is optional; in the absence of such a specification, the term definition acts as the specification. There is a type inference system which can derive the specification of a term from its definition. If both a specification and a definition of a term are given, the term definition must conform to the term specification.

term decl list ::= term decl {; term decl}*
term decl ::= term dec | term def
term dec ::= dec identifier : term
term def ::= def identifier = term
The basic expressions of PowerEpsilon, called terms, are defined as follows:
term ::= identifier
       | Prop
       | Type(natural)
       | Kind
       | (term)
       | !(identifier : term {, identifier : term}*) term
       | ?(identifier : term {, identifier : term}*) term
       | \(identifier : term {, identifier : term}*) term
       | [term -> term {-> term}*]
       | @(term, term {, term}*)
       | <term, term {, term}*>
       | let identifier = term in term
       | lec identifier : term in term
2.3.4 Currying and Higher-Order Terms
PowerEpsilon is a higher-order language – functions are first-class citizens and can be both passed as parameters and returned as results. Function application is left-associative, so when we write @(f, x, y) it is parsed as @(@(f, x), y), meaning that the result of applying f to x is a function, which is then applied to y. Functions of more than one argument which take their arguments “one at a time”, like f, are called curried. The curried form of λ-expressions provides some flexibility which is important for writing easily understood programs. For example,

\(x1 : A1) (\(x2 : A2) · · · (\(xn : An) M) · · ·)

can be written

\(x1 : A1) \(x2 : A2) · · · \(xn : An) M,

or further simplified to

\(x1 : A1, x2 : A2, · · ·, xn : An) M.

!-terms and ?-terms are treated similarly. In addition, the terms

[A1 -> A2 -> · · · -> An−1 -> An]
@(A1, A2, A3, · · ·, An)
<A1, A2, · · ·, An−1, An>

are respectively abbreviations for

[A1 -> [A2 -> · · · [An−1 -> An] · · ·]]
@(· · · @(@(A1, A2), A3), · · ·, An)
<A1, <A2, · · · <An−1, An> · · ·>>
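The left-associative application and “one at a time” argument passing described above can be mimicked directly in most higher-order languages. A small Python sketch (our illustration, not PowerEpsilon syntax):

```python
# A curried function: f takes its two arguments "one at a time",
# so f(x)(y) plays the role of @(@(f, x), y).
def f(x):
    return lambda y: x + y

g = f(3)          # applying f to 3 yields a function ...
result = g(4)     # ... which is then applied to 4
print(result)     # 7
```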
Unlike function application, which is left-associative, all other constructs are right-associative. The dec and def constructs are sugared λ-expressions which can be decoded into an explicit in-line form of λ-expression. For instance, suppose a sequence of def-constructs is given as follows:

def S = \(T : Prop)
        \(x : [T -> T -> T], y : [T -> T], z : T)
        @(x, z, @(y, z));
def K = \(T : Prop) \(x : T, y : T) x;
def I = \(T : Prop) \(x : T) x;
......

These λ-expressions can be decoded as

\(S : \(T : Prop)
      \(x : [T -> T -> T], y : [T -> T], z : T)
      @(x, z, @(y, z)))
\(K : \(T : Prop) \(x : T, y : T) x)
\(I : \(T : Prop) \(x : T) x)
......
In PowerEpsilon, expressions and types have the same form of representation; for example, the term \(x : T) E can be interpreted either as an expression or as a type, depending on the role played by the term in its context. The current version of PowerEpsilon has neither built-in functions nor built-in types; everything must be constructed from the bottom up. Recursive functions can be defined using both the dec and def constructs. There are two kinds of variables: identifiers and meta-variables. An identifier is a string of letters or digits beginning with a letter. A meta-variable is an identifier beginning with the character “*”. Meta-variables are used specifically for defining inductively defined types.
2.3.5 Abstract Syntax
The abstract syntax of the basic expressions of PowerEpsilon is defined as follows:

T ::= Prop
    | Type(i)   (i ∈ ω)
    | Kind
    | x
    | !(x : T1) T2
    | ?(x : T1) T2
    | \(x : T1) T2
    | [T1 -> T2]
    | @(T1, T2)
    | <T1, T2>
    | let x = T1 in T2
    | lec x : T1 in T2
The kinds Prop, Type(i), and Kind are called type universes. Every kind is assigned a number as its level:

LEVEL(Prop)    = −1;
LEVEL(Type(i)) = i;
LEVEL(Kind)    = ω.
The semantic domains of PowerEpsilon can be classified into three categories. The first is the set of terms. Intuitively, terms are the “ordinary expressions” that describe computable functions and results of computation. The second category contains the types of terms. The third class is kinds, which are used to describe the functionality of subexpressions of types. Essentially, kinds are the “types” of things that appear in types. In PowerEpsilon there is no syntactic distinction between terms, types and kinds; all things are represented as terms. The semantic classification of any given term can be determined by the type inference system. Terms are defined inductively as follows: Prop, Type(i) for any natural number i, and Kind are terms; any variable x is a term; if t1 and t2 are terms, then @(t1, t2) and [t1 -> t2] are terms; if x is a variable and T1 and T2 are terms, then \(x : T1) T2, !(x : T1) T2, ?(x : T1) T2, let x = T1 in T2 and lec x : T1 in T2 are terms, where the symbol \ is the λ-abstraction symbol λ, ! is the universal quantifier ∀ and ? is the existential quantifier ∃. Interaction with PowerEpsilon is centered on the theory. A theory contains a set of subtheories and an ordered collection of definitions, theorems
and objects. A checked theory is retained by the system and can later be referred to by users. The definition of a term, type or kind t with body b is given in the form “def t = b”. The declaration of a term, type or kind t with declaration T is given in the form “dec t : T”. In a formal theory, axioms and assumptions can be described using declaration-constructs, and the proofs of propositions, lemmas and theorems are represented using definition-constructs.
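The inductive definition of terms given above maps naturally onto an abstract syntax tree. The following Python sketch (our own encoding, covering only a few of the constructors) shows one possible representation:

```python
from dataclasses import dataclass

# A fragment of the term language: Prop, Type(i), variables,
# abstraction \(x : T1) T2, application @(t1, t2), and [t1 -> t2].

@dataclass
class Prop:
    pass

@dataclass
class TypeU:
    level: int                     # Type(i)

@dataclass
class Var:
    name: str

@dataclass
class Lam:                         # \(var : ty) body
    var: str
    ty: object
    body: object

@dataclass
class App:                         # @(fun, arg)
    fun: object
    arg: object

@dataclass
class Arrow:                       # [dom -> cod]
    dom: object
    cod: object

# the polymorphic identity \(T : Prop) \(x : T) x:
identity = Lam("T", Prop(), Lam("x", Var("T"), Var("x")))
print(identity.var)                # T
```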
2.4 Type Inference Rules of PowerEpsilon
We now describe the judgment form and the inference rules of PowerEpsilon.
2.4.1 Contexts
A context Γ is a list of bindings of the form x : A, where x is a variable and A is a term. The empty context is denoted by the empty list of bindings. The set of free variables of a context Γ ≡ x1 : A1, · · ·, xn : An is defined as

FV(Γ) = ⋃_{1≤i≤n} ({xi} ∪ FV(Ai))
2.4.2 Judgements
A judgment is of the form

Γ ⊢ M : A

where Γ is a context, and M and A are terms. The intuitive meaning of the judgment is that M has type A in context Γ. When Γ is the empty context, we simply write ⊢ M : A.
2.4.3 Inference Rules
The type inference rules are listed in Figure 2.1 and Figure 2.2. In the rules, Γ is assumed to be a valid context, K and K′ stand for arbitrary kinds, and i, j and k for natural numbers. There is a special treatment of the formation rule for strong sum types (the existential quantifier formula) ?(x : A) B, to prevent logical inconsistency in the system: the objects of the Prop hierarchy are lifted to the Type(0) hierarchy, the objects of the Type(i) hierarchy are lifted to the Type(i+1) hierarchy (i ∈ ω), and types of the form ?(x : Kind) @(P, x) are not allowed. This is because adding type-indexed strong sums directly to the proposition level Prop of the system results in a logically inconsistent system in which Girard's paradox can be derived. Furthermore, the type ?(x : Kind) @(P, x) has no formal logical meaning, since Kind is merely used for specifying generic type hierarchies in the system.
(Ax)
    ─────────────────
    ⊢ Prop : Type(0)

(Variable)
    Γ, x : A, Γ′ ⊢ Prop : Type(0)
    ─────────────────────────────
    Γ, x : A, Γ′ ⊢ x : A

(Constant)
    Γ ⊢ A : K
    ─────────────────────────  (x ∉ FV(Γ))
    Γ, x : A ⊢ Prop : Type(0)

(Type)
    Γ ⊢ Prop : Type(0)
    ─────────────────────────
    Γ ⊢ Type(i) : Type(i + 1)

(Π − 1)
    Γ ⊢ A : K;  Γ, x : A ⊢ P : Prop
    ───────────────────────────────
    Γ ⊢ !(x : A)P : Prop

(Π − 2)
    Γ ⊢ A : K;  Γ, x : A ⊢ B : Type(j)
    ──────────────────────────────────  (k = max{LEVEL(K), j})
    Γ ⊢ !(x : A)B : Type(k)

    where K ≡ Prop or Type(i), (i ∈ ω)

Figure 2.1: Type Inference Rules for PowerEpsilon
(−> − 1)
    Γ, A : K ⊢ P : Prop
    ───────────────────
    Γ ⊢ [A−>P] : Prop

(−> − 2)
    Γ ⊢ A : K;  Γ ⊢ B : Type(j)
    ───────────────────────────  (k = max{LEVEL(K), j})
    Γ ⊢ [A−>B] : Type(k)

    where K ≡ Prop or Type(i), (i ∈ ω)

(λ − Abst)
    Γ, x : A ⊢ M : B;  Γ, x : A ⊢ B : K
    ───────────────────────────────────
    Γ ⊢ \(x : A)M : !(x : A)B

(App − 1)
    Γ ⊢ M : !(x : A)B;  Γ ⊢ N : A′
    ──────────────────────────────  (A′ ≪ A)
    Γ ⊢ @(M, N) : B[N/x]

(App − 2)
    Γ ⊢ M : [A−>B];  Γ ⊢ N : A′
    ───────────────────────────  (A′ ≪ A)
    Γ ⊢ @(M, N) : B

(App − 3)
    Γ ⊢ M ≡ \(x : Kind)M′;  Γ ⊢ N ≡ Prop;  Γ ⊢ M′[N/x] : A
    ──────────────────────────────────────────────────────
    Γ ⊢ @(M, N) ≡ M′[N/x] : A

(App − 4)
    Γ ⊢ M ≡ \(x : Kind)M′;  Γ ⊢ N ≡ Type(i);  Γ ⊢ M′[N/x] : A
    ─────────────────────────────────────────────────────────
    Γ ⊢ @(M, N) ≡ M′[N/x] : A

(Σ)
    Γ ⊢ A : K;  Γ, x : A ⊢ B : K′
    ─────────────────────────────  (k = max{0, LEVEL(K), LEVEL(K′)})
    Γ ⊢ ?(x : A)B : Type(k)

    where K ≡ Prop or Type(i), K′ ≡ Prop or Type(j), (i, j ∈ ω)

(Pair)
    Γ ⊢ M : A′;  Γ ⊢ N : B′;  Γ, x : A ⊢ B : K
    ──────────────────────────────────────────  (A′ ≪ A, B′ ≪ B[M/x])
    Γ ⊢ <M, N> : ?(x : A)B

(Proj − 1)
    Γ ⊢ M : ?(x : A)B
    ─────────────────
    Γ ⊢ @(FST, M) : A

(Proj − 2)
    Γ ⊢ M : ?(x : A)B
    ──────────────────────────────
    Γ ⊢ @(SND, M) : B[@(FST, M)/x]

(Conversion)
    Γ ⊢ M : A;  Γ ⊢ A′ : K
    ──────────────────────  (A ≡ A′)
    Γ ⊢ M : A′

(Cumulativity)
    Γ ⊢ M : A;  Γ ⊢ A′ : K
    ──────────────────────  (A ≪ A′)
    Γ ⊢ M : A′

Figure 2.2: More on Type Inference Rules for PowerEpsilon
2.4.4 Derivations
A derivation of a judgment J is a finite sequence of judgments J1, · · ·, Jn with Jn ≡ J such that, for all 1 ≤ i ≤ n, Ji is the consequence of some instance of an inference rule whose premises are in {Jk | k < i}. A judgment J is derivable if there is a derivation of J.
2.4.5 Γ-Terms, Γ-Types and Γ-Propositions
A term M is called a Γ-term (or a well-typed term under Γ) if Γ ⊢ M : T for some T. A term T is called a Γ-type if Γ ⊢ T : K for some kind K. A Γ-type T is called a Γ-proposition if Γ ⊢ T′ : Prop for some T′ ≡ T, and a proper Γ-type otherwise.
2.4.6 Type Conversion Rules
We now introduce another kind of judgment, Γ ⊢ M ≡ N, whose intuitive meaning is that the terms M and N denote the same object. Here ≡ is the smallest congruence over propositions and contexts containing β-conversion. Figure 2.3 summarizes the definition of the type conversion rules.
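The β-conversion at the heart of ≡, namely @(\(x : A) M, N) ≡ M[N/x], can be sketched operationally. The Python encoding below is our own illustration over a toy term representation (it deliberately ignores the capture-avoiding renaming that a full implementation would need):

```python
# Toy terms: a variable is a string, ("lam", x, body) is \(x) body,
# and ("app", f, a) is @(f, a).

def subst(term, x, n):
    """Compute M[N/x]: replace free occurrences of variable x in term by n."""
    if term == x:
        return n
    if isinstance(term, tuple) and term[0] == "app":
        return ("app", subst(term[1], x, n), subst(term[2], x, n))
    if isinstance(term, tuple) and term[0] == "lam":
        if term[1] == x:          # x is rebound here: stop substituting
            return term
        return ("lam", term[1], subst(term[2], x, n))
    return term

def beta(term):
    """One beta step: @(\\(x) body, n)  ->  body[n/x]."""
    if (isinstance(term, tuple) and term[0] == "app"
            and isinstance(term[1], tuple) and term[1][0] == "lam"):
        _, (_, x, body), n = term
        return subst(body, x, n)
    return term

# @(\(x) @(x, x), y) reduces to @(y, y):
print(beta(("app", ("lam", "x", ("app", "x", "x")), "y")))   # ('app', 'y', 'y')
```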
2.4.7 Cumulativity Relation
The type inclusions between the universes induce a type cumulativity that is syntactically characterized by the cumulativity relation ≪. The binary relations ≪i (i ∈ ω) over terms are inductively defined as follows:

1. A ≪0 B if and only if one of the following holds:
   (a) A ≡ B;
   (b) A ≡ Prop and B ≡ Type(i) for some i ∈ ω, or A ≡ Type(i) and B ≡ Type(j) for some i < j;
   (c) A ≡ Prop and B ≡ Kind, or A ≡ Type(i) and B ≡ Kind for some i ∈ ω.

2. A ≪i+1 B if and only if one of the following holds:
   (a) A ≪i B;
   (b) A ≡ !(x : A1)A2 and B ≡ !(x : B1)B2 for some A1 ≪i B1 and A2 ≪i B2;
   (c) A ≡ \(x : A1)A2 and B ≡ \(x : B1)B2 for some A1 ≪i B1 and A2 ≪i B2;
   (d) A ≡ ?(x : A1)A2 and B ≡ ?(x : B1)B2 for some A1 ≪i B1 and A2 ≪i B2;
   (e) A ≡ [A1−>A2] and B ≡ [B1−>B2] for some A1 ≪i B1 and A2 ≪i B2;
   (f) A ≡ @(A1, A2) and B ≡ @(B1, B2) for some A1 ≪i B1 and A2 ≪i B2;
(Type − Equality1)
    Γ ⊢ M : P;  Γ ⊢ P : Prop;  Γ ⊢ P ≡ Q
    ─────────────────────────────────────
    Γ ⊢ M : Q

(Type − Equality2)
    Γ ⊢ M : P;  Γ ⊢ P : Type(i);  Γ ⊢ P ≡ Q
    ────────────────────────────────────────
    Γ ⊢ M : Q

(Type − Equality3)
    Γ ⊢ M : P;  Γ ⊢ P : Kind;  Γ ⊢ P ≡ Q
    ─────────────────────────────────────
    Γ ⊢ M : Q

(Reflectivity)
    Γ ⊢ M : N
    ──────────
    Γ ⊢ M ≡ M

(Symmetry)
    Γ ⊢ M ≡ N
    ──────────
    Γ ⊢ N ≡ M

(Transitivity)
    Γ ⊢ M ≡ N;  Γ ⊢ N ≡ P
    ─────────────────────
    Γ ⊢ M ≡ P

(Abst − Equality)
    Γ ⊢ P1 ≡ P2;  Γ, x : P1 ⊢ M1 ≡ M2
    ─────────────────────────────────
    Γ ⊢ \(x : P1)M1 ≡ \(x : P2)M2

(π − Equality)
    Γ ⊢ P1 ≡ P2;  Γ, x : P1 ⊢ M1 ≡ M2
    ─────────────────────────────────
    Γ ⊢ !(x : P1)M1 ≡ !(x : P2)M2

(Σ − Equality)
    Γ ⊢ P1 ≡ P2;  Γ, x : P1 ⊢ M1 ≡ M2
    ─────────────────────────────────
    Γ ⊢ ?(x : P1)M1 ≡ ?(x : P2)M2

(−> − Equality)
    Γ ⊢ P1 ≡ P2;  Γ ⊢ M1 ≡ M2
    ─────────────────────────
    Γ ⊢ [P1−>M1] ≡ [P2−>M2]

(App − Equality)
    Γ ⊢ @(M, N) : P;  Γ ⊢ M ≡ M1;  Γ ⊢ N ≡ N1
    ─────────────────────────────────────────
    Γ ⊢ @(M, N) ≡ @(M1, N1)

(Pair − Equality)
    Γ ⊢ <M, N> : P;  Γ ⊢ M ≡ M1;  Γ ⊢ N ≡ N1
    ────────────────────────────────────────
    Γ ⊢ <M, N> ≡ <M1, N1>

(β − conversion)
    Γ, x : A ⊢ M : P;  Γ ⊢ N : A
    ────────────────────────────
    Γ ⊢ @(\(x : A)M, N) ≡ M[N/x]

(Meta − Var)
    Γ ⊢ ∗a : A;  Γ ⊢ b : B;  Γ ⊢ A ≡ B
    ──────────────────────────────────
    Γ ⊢ ∗a ≡ b

Figure 2.3: Type Conversion Rules for PowerEpsilon
   (g) A ≡ <A1, A2> and B ≡ <B1, B2> for some A1 ≪i B1 and A2 ≪i B2.

The additional cumulativity relations ≪ defined for Kind over variables of type Kind are given as follows:

1. X ≪ Prop if X is a variable and X : Kind;
2. X ≪ Type(i) if X is a variable and X : Kind;
3. X ≪ Y if X and Y are variables, X : Kind and Y : Kind.

In connection with the rules App − 3 and App − 4, this constitutes a completely lazy type evaluation mechanism for Kind. With the definitions given above, the cumulativity relation ≪ is defined as

≪  =def  ⋃_{i ∈ ω} ≪i
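The base case ≪0 of the definition above is easy to render executable. A Python sketch follows (the encoding of universes as "Prop", ("Type", i) and "Kind" is ours, chosen only for the illustration):

```python
def cumul0(a, b):
    """A <<0 B restricted to the universes Prop, Type(i) and Kind."""
    def is_type(t):
        return isinstance(t, tuple) and t[0] == "Type"
    if a == b:                                       # clause (a): A == B
        return True
    if a == "Prop" and is_type(b):                   # clause (b): Prop << Type(i)
        return True
    if is_type(a) and is_type(b) and a[1] < b[1]:    # clause (b): Type(i) << Type(j), i < j
        return True
    if b == "Kind" and (a == "Prop" or is_type(a)):  # clause (c): anything below Kind
        return True
    return False

print(cumul0("Prop", ("Type", 0)))       # True
print(cumul0(("Type", 2), ("Type", 1)))  # False
```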
2.4.8 Curry-Howard Isomorphism
One of the basic ideas behind constructive type theory is the Curry-Howard interpretation of propositions as types and proofs as programs. This view of propositions is related both to Heyting's explanation of intuitionistic logic and, on a more formal level, to Kleene's realizability interpretation of intuitionistic arithmetic. The Curry-Howard isomorphism [18] establishes the relationship between the calculus of deductions in higher-order intuitionistic predicate logic and the calculus of typed terms in the polymorphic λ-calculus. The isomorphism maps

• propositions of higher-order predicate logic to corresponding types;
• deductions of predicates to terms of the corresponding types.

This correspondence between propositions and types is a powerful paradigm for mechanical proof construction and computer programming. The judgment or type assignment “a : A” in PowerEpsilon can be read in at least the following four ways:

• a is an element of type A.
• a is a proof object for the proposition A.
• a is a program satisfying the specification A.
• a is a solution to the problem A.
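The reading of “a : A” as “a is a proof object for A” can be illustrated with a tiny example. In the Python sketch below (our illustration, using type hints in place of PowerEpsilon types), any total function of the indicated type is a proof that conjunction is commutative:

```python
from typing import Tuple, TypeVar

A = TypeVar("A")
B = TypeVar("B")

# Under the Curry-Howard reading, a total function of this type is a
# proof object: from A /\ B (a pair) we can always obtain B /\ A.
def and_commutes(proof: Tuple[A, B]) -> Tuple[B, A]:
    a, b = proof      # a proof of A /\ B is a pair (a, b)
    return (b, a)     # swapping the pair proves B /\ A

print(and_commutes((1, "two")))   # ('two', 1)
```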
In classical mathematics, a proposition is thought of as being true or false independently of whether we can prove or disprove it. On the other hand, a proposition is constructively true only if we have a method of proving it. For example, classically the law of excluded middle, A ∨ (¬A), is true, since the proposition A is either true or false. Constructively, however, a disjunction is true only if we can prove one of the disjuncts. Since we have no method of proving or disproving an arbitrary proposition A, we have no proof of A ∨ (¬A), and therefore the law of excluded middle is not constructively valid. We may establish the following identifications between propositions and types.

• A proof of type [A -> B] is a function (method, program) which to each proof of A gives a proof of B.

• A proof of type @(Product, A, B) is a pair whose first component is a proof of A and whose second component is a proof of B.

• A proof of type @(Sum, A, B) is either a proof of A or a proof of B, together with the information of which of A and B we have a proof. The elements of @(Sum, A, B) are of the form @(INJ1, A, B, a) and @(INJ2, A, B, b), where a : A and b : B.

• A proof of type !(x : A) @(B, x) is a function (method, program) which to each element a of type A gives a proof of @(B, a). The type corresponding to this is the Cartesian product of a family of types; its elements are functions which, when applied to an element a of the type A, give an element of the type @(B, a).

• A proof of type ?(x : A) @(B, x) is a pair whose first component a is an element of type A and whose second component is a proof of @(B, a). The type corresponding to this is the disjoint union of a family of types; its elements are pairs <a, b> where a : A and b : @(B, a).

From the point of view of computer programming, a proposition can be interpreted as the specification of the task of a program in the following way.
• @(Product, A, B) is a specification of programs which, when executed, yield a pair @(PRODUCT, A, B, a, b), where a is a program for the task A and b is a program for the task B.

• @(Sum, A, B) is a specification of programs which, when executed, yield either @(INJ1, A, B, a) or @(INJ2, A, B, b), where a is a program for A and b is a program for B.

• [A -> B] is a specification of programs which, when executed, yield \(x : A) b, where b is a program for B under the assumption that x is a program for A.
• !(x : A) @(B, x) is a specification of programs which, when executed, yield \(x : A) b, where b is a program for @(B, x) under the assumption that x is an object of A. This means that when a program for the problem !(x : A) @(B, x) is applied to an arbitrary object x of A, the result will be a program for @(B, x).

• ?(x : A) @(B, x) is a specification of programs which, when executed, yield <a, b>, where a is an object of A and b is a program for @(B, a). So, to solve the task ?(x : A) @(B, x), it is necessary to find a method which yields an object a in A and a program for @(B, a).

There are two ways to manipulate types: one is to construct programs from types, and the other is to deduce type structures from the corresponding programs. The former corresponds to program derivation from specifications, and the latter to program verification.

2.4.8.1 From Types to Programs: The Constructive Mathematics
The extraction of programs from proofs, or programming in constructive logic, is based on the idea that under some restrictions proofs can be considered as programs. The general scheme of extracting programs from proofs is the following: at the beginning, one writes a specification of the problem in some formal language (for example, PowerEpsilon). Then a formal proof of this specification is constructed, and the program is extracted from this proof according to one of the known methods. Because proof construction is not decidable, strategic information has to be provided by users.

2.4.8.2 From Programs to Types: The Type Inferences
The type inference mechanism of PowerEpsilon can be used mathematically as a proof editor to check whether a proof for a mathematical theorem is written correctly.
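As a small illustration (a sketch; Ident is a name introduced here for illustration and is not part of the system), the identity function is a proof of the theorem !(A : Prop) [A -> A], and the type inference mechanism can check this claim:

def Ident = \(A : Prop) \(a : A) a;

Under the propositions-as-types reading, checking that Ident has the type !(A : Prop) [A -> A] is exactly checking that the proof of "A implies A" is written correctly.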
2.4.9 Applications of PowerEpsilon
There are many different ways in which PowerEpsilon can be used.

Standardization of Mathematics The system can be used as a constructive logic to develop mathematical proofs. The papers submitted to mathematical journals usually contain a large number of theorems and proofs, and these often suffer from two problems. First, because of space limitations, the proofs are incompletely presented. Secondly, most proofs contain errors. Judging the correctness of such papers places a heavy burden on the referees. These problems may be alleviated by standardizing mathematics and mechanizing proof checking, to a certain degree, much as programming language processing has been standardized and mechanized in software engineering.

Functional Language The system can be seen as a functional language with a number of novel features: every expression has a defined value; every program terminates; the system of types is more expressive than those in common use, since it provides dependent products and function spaces; and the language is integrated with a logic in which to reason about the programs.

Formal Program Development System The system can be used as a tool for formal program derivation, where programs are extracted from constructive proofs.

Formal Program Verification System The system can be used as a tool for program verification, where programs are developed first and their correctness is proved afterwards. We therefore have a system in which programming and verification are integrated.

Program Transformation System The system can also be used to support program transformation. The essence of program transformation is to turn a program into another with the same application behaviour, yet improved in some aspect such as time efficiency or space usage. Two functions have the same behaviour when they return the same results for all arguments, in other words, when they are extensionally equal. In PowerEpsilon, program transformation therefore involves the replacement of an object by an extensionally equivalent one, through a series of simple steps of the same kind. From the point of view of constructive logic, program transformation can be considered as a translation from one type to another. Therefore, PowerEpsilon can be used as a meta-language to study the general strategies and schemes of program transformation.
Decision Support System PowerEpsilon can be used as a decision support system, based on the observation that decision making can be viewed as a theorem proving process: for a given problem, we have to prove whether there is a solution to it. A typical decision making process can be expressed by at least the following two propositions:

!(p : Problem) ?(s : Solution) @(Solve, s, p);

or, for a specific SOLUTION of type Solution,

!(p : Problem) @(Solve, SOLUTION, p);
For the first formula we have to decide, for every problem p, whether we can find a solution s which solves p; for the second formula we have to decide, for every problem p, whether the specific method SOLUTION solves p.

Knowledge Base System PowerEpsilon can be used as a knowledge base system. A formalism for representing knowledge should be capable of fulfilling at least the following five criteria:

1. Say that something has a certain property without saying what thing has that property. For example, to say that John has a sister without saying who she is. Formally speaking, we have ?(x : Person) @(Sister, John, x).

2. Say that everything in a certain class has a certain property without saying what the members of that class are. For example, to say that all dogs are mammals without listing all the dogs. Formally speaking, we have !(x : Animal) [@(Dog, x) -> @(Mammal, x)].

3. Say that at least one of two statements is true without saying which is true. For example, to say that John teaches computer science 101 or 102 without specifying which of these courses he actually teaches. Formally speaking, we have @(Or, @(Teach, John, CSC101), @(Teach, John, CSC102)).
4. Explicitly say that something is false. For example, John has no sister. Formally speaking, @(Not, ?(x : Person) @(Sister, John, x)).

5. Either settle or leave open to doubt whether two non-identical expressions name the same object. For example, @(FATHER, John) and Bill are two non-identical expressions. They may or may not denote the same individual. If they do, we can explicitly say so by @(Equal, Person, @(FATHER, John), Bill). If there is doubt about their naming the same individual, we omit the above equality. This omission does not preclude their naming the same individual; it simply leaves the knowledge base agnostic about this possibility.

PowerEpsilon easily satisfies these five criteria and is therefore a good candidate for representing knowledge. Besides knowledge representation, PowerEpsilon provides powerful mechanisms for knowledge manipulation, including natural deduction and consistency checking. The reflection mechanism of PowerEpsilon provides a flexible way for the system to adjust itself to a specific application dynamically and autonomously.

Computer-Aided Education Computer-aided instruction has many advantages. When well programmed, a computer can relieve teachers of giving and correcting drill exercises, and even of teaching many manipulative techniques. Students who react negatively to a teacher can turn to the computer; it instructs without emotional involvement and gives instant, impartial feedback. It shows no impatience no matter how often a student repeats a mistake. The computer's greatest asset is that it forces the student to become an active participant. For most students, mathematical theorem proving is a hard job, and PowerEpsilon can provide a great deal of assistance in this area.

Tools for Studying Semantics of Programming Languages Owing to its powerful type system, PowerEpsilon can be used as a formal tool to study the semantics of programming languages.
2.5 Formal Logic in PowerEpsilon
2.5.1 Logical Operators Defined by PowerEpsilon
2.5.1.1 Logical Implication
If A and B are two predicates, then [A -> B] represents the logical implication; its antecedent is A and its consequent is B.

2.5.1.2 Free and Bound Variables
A variable x of type T is said to be free in an expression e if x occurs in e and !(x : T) does not occur in e. We say that x is bound in e if !(x : T) occurs in e.

2.5.1.3 Universal Quantified Formula

If A and B are two predicates and x is a free variable in B, then !(x : A) B is a formula.

2.5.1.4 Existential Quantified Formula

If A and B are two predicates and x is a free variable in B, then ?(x : A) B is a formula.

2.5.2 Logical Operators Derived from PowerEpsilon
Four logical operators are defined over the predicate universe Prop and the type universes Type(i), where i = 0, 1, . . ..
2.5.2.1 Logical Negation
If B is a predicate, then @(Not, B) is a predicate too.

dec Not  : [Prop -> Prop];
dec Not0 : [Type(0) -> Prop];
dec Not1 : [Type(1) -> Prop];
dec Not2 : [Type(2) -> Prop];
dec Not3 : [Type(3) -> Prop];
dec Not4 : [Type(4) -> Prop];

2.5.2.2 Logical Conjunction
If A and B are two predicates, then @(And, A, B) represents the logical conjunction; its operands A and B are called conjuncts.

dec And  : [Prop -> Prop -> Prop];
dec And0 : [Type(0) -> Type(0) -> Prop];
dec And1 : [Type(1) -> Type(1) -> Prop];
dec And2 : [Type(2) -> Type(2) -> Prop];
dec And3 : [Type(3) -> Type(3) -> Prop];
dec And4 : [Type(4) -> Type(4) -> Prop];
PRODUCT is the constructor of And.

dec PRODUCT : !(A : Prop, B : Prop) [A -> B -> @(And, A, B)];

PROJ1 and PROJ2 are two projection functions of And.

dec PROJ1 : !(A : Prop, B : Prop) [@(And, A, B) -> A];
dec PROJ2 : !(A : Prop, B : Prop) [@(And, A, B) -> B];
Similarly, PRODUCT0 is the constructor of And0.

dec PRODUCT0 : !(A : Type(0), B : Type(0)) [A -> B -> @(And0, A, B)];

and PROJ01 and PROJ02 are two projection functions of And0.

dec PROJ01 : !(A : Type(0), B : Type(0)) [@(And0, A, B) -> A];
dec PROJ02 : !(A : Type(0), B : Type(0)) [@(And0, A, B) -> B];
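As a small worked example (a sketch; AndComm is a name introduced here, not a primitive of the system), the constructor and the projections suffice to prove that conjunction is commutative:

def AndComm = \(A : Prop, B : Prop)
  \(p : @(And, A, B))
    @(PRODUCT, B, A, @(PROJ2, A, B, p), @(PROJ1, A, B, p));

The term AndComm should inhabit !(A : Prop, B : Prop) [@(And, A, B) -> @(And, B, A)]: the two components of p are projected out and re-paired in the opposite order.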
2.5.2.3 Logical Disjunction
If A and B are two predicates, then @(Or, A, B) represents the logical disjunction; its operands A and B are called disjuncts.

dec Or  : [Prop -> Prop -> Prop];
dec Or0 : [Type(0) -> Type(0) -> Prop];
dec Or1 : [Type(1) -> Type(1) -> Prop];
dec Or2 : [Type(2) -> Type(2) -> Prop];
dec Or3 : [Type(3) -> Type(3) -> Prop];
INJ1 and INJ2 are two constructors of Or.

dec INJ1 : !(A : Prop, B : Prop) [A -> @(Or, A, B)];
dec INJ2 : !(A : Prop, B : Prop) [B -> @(Or, A, B)];

WHEN is the projection function of Or.

dec WHEN : !(U : Prop, V : Prop, W : Prop)
           [@(Or, U, V) -> [U -> W] -> [V -> W] -> W];

INJ01 and INJ02 are two constructors of Or0.

dec INJ01 : !(A : Type(0), B : Type(0)) [A -> @(Or0, A, B)];
dec INJ02 : !(A : Type(0), B : Type(0)) [B -> @(Or0, A, B)];

WHEN0 is the projection function of Or0.

dec WHEN0 : !(U : Type(0), V : Type(0), W : Type(0))
            [@(Or0, U, V) -> [U -> W] -> [V -> W] -> W];
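As a small worked example (a sketch; OrComm is a name introduced here, not a primitive of the system), WHEN can be used to prove that disjunction is commutative, by case analysis on the two constructors:

def OrComm = \(A : Prop, B : Prop)
  \(p : @(Or, A, B))
    @(WHEN, A, B, @(Or, B, A), p,
      \(a : A) @(INJ2, B, A, a),
      \(b : B) @(INJ1, B, A, b));

The term OrComm should inhabit !(A : Prop, B : Prop) [@(Or, A, B) -> @(Or, B, A)]: if p was built with INJ1 the first branch fires, otherwise the second.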
dec Or30 : [Type(0) -> Type(0) -> Type(0) -> Prop];

INJ301, INJ302 and INJ303 are three constructors of Or30.

dec INJ301 : !(A : Type(0), B : Type(0), C : Type(0))
             [A -> @(Or30, A, B, C)];
dec INJ302 : !(A : Type(0), B : Type(0), C : Type(0))
             [B -> @(Or30, A, B, C)];
dec INJ303 : !(A : Type(0), B : Type(0), C : Type(0)) [C -> @(Or30, A, B, C)];
WHEN30 is the projection function of Or30.

dec WHEN30 : !(U : Type(0), V : Type(0), Z : Type(0), W : Type(0))
             [@(Or30, U, V, Z) -> [U -> W] -> [V -> W] -> [Z -> W] -> W];
2.5.2.4 Equivalence Relation
Two formulas A and B are said to be equivalent if and only if [A -> B] and [B -> A].

def Eq  = \(A : Prop,    B : Prop)    @(And,  [A -> B], [B -> A]);
def Eq0 = \(A : Type(0), B : Type(0)) @(And0, [A -> B], [B -> A]);
def Eq1 = \(A : Type(1), B : Type(1)) @(And1, [A -> B], [B -> A]);
def Eq2 = \(A : Type(2), B : Type(2)) @(And2, [A -> B], [B -> A]);
def Eq3 = \(A : Type(3), B : Type(3)) @(And3, [A -> B], [B -> A]);

2.5.3 Proof by Contradiction
In a proof by contradiction, one begins by supposing that the assertion we wish to prove is false. We can then feel free to use the negation of what we are trying to prove as one of the initial statements in constructing a proof. In a proof by contradiction we look for a pair of statements, developed in the course of the proof, which contradict one another. Since both cannot be true, we have to conclude that our original supposition was wrong, and therefore that our desired conclusion is correct.

The possibility of extending the Curry-Howard isomorphism to classical logic is a significant discovery of the past decade. It has consequences both from the point of view of mathematical logic and from the point of view of computer science. We obtain classical logic from intuitionistic logic by adding the axiom ¬¬A -> A for any proposition A. The computational counterpart of this axiom has only recently been understood. It corresponds to a control operator such as the operator C defined by M. Felleisen in 1986. This kind of operator, with appropriate reduction rules, makes it possible to add imperative features (for instance goto, exception handling, ..., but not assignments) to purely functional languages.
Type Theory
38
A good candidate for implementation is probably the λµ-calculus of M. Parigot. It is a variant of the λ-calculus + C, but has more elegant reduction rules. Now, if we consider classical logic without implication, it becomes possible to identify the formula ¬¬A with A. This has led to the λSym-calculus, a nice restriction of the λ-calculus defined with non-deterministic computational rules. Another approach is to interpret proofs as winning strategies for a two-player game in which the rules depend on the disputed formula. This view emphasizes a dynamic analysis of computation, strongly related to weak-head-style reduction. In particular, in this framework we can define computation algorithms radically different from the usual one based on the λ-calculus, but with good behaviour on some examples of classical proofs.

dec Excl_Mid : !(A : Type(0), a : A, b : A)
               @(Sum2, @(Equal0, A, a, b), @(Not, @(Equal0, A, a, b)));
2.5.4 Informal and Formal Theorem Proving
By informal mathematical theorem proving, we mean that the proof is described in a natural language. As a comparative study, let us take an example. Needless to say, a theorem and its proof described in natural language and in PowerEpsilon are completely different from each other. Of course, the theorem and proof written in natural language are much easier to understand, while the PowerEpsilon counterpart is much clearer and more precise. Also, the description of a theorem and its proof in PowerEpsilon is longer than in natural language.
2.6 General Schema of Investigating Algebraic Structures

The algebraic presentation for a variety of algebraic structures, such as semigroups, groups, rings and fields, in terms of PowerEpsilon is generally configured as follows:

1. Definition of the algebraic structure.
2. Subalgebra of an algebraic structure.
3. Finite algebra of an algebraic structure.
4. Isomorphism of algebraic structures.
5. Homomorphism of algebraic structures.
2.7 Functions
Generally, if A and B are sets, a function f : [A -> B] on A to B is a rule which assigns to each element a in A an element @(f, a) in B. A function f : [A -> B] is also called a mapping, a transformation, or a correspondence from A to B. The set A is called the domain of the function f, and B its codomain. The image (or “range”) of a function f : [A -> B] is the set of all the “values” of the function; that is, all @(f, a) for a in A. The image is a subset of the codomain B, but need not be all of B.
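For example (a sketch; Compose is a name introduced here for illustration), the composition of two functions can be written directly in PowerEpsilon, and its typing records that the codomain of the first function must be the domain of the second:

def Compose = \(A : Type(0), B : Type(0), C : Type(0),
                f : [A -> B], g : [B -> C])
  \(a : A) @(g, @(f, a));

Here @(Compose, A, B, C, f, g) is the function sending each a in A to @(g, @(f, a)) in C.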
2.7.1 Injection, Surjection and Bijection
A function f : [A -> B] is called into when a = b always implies @(f, a) = @(f, b).
def Into = \(A1 : Type(0), A2 : Type(0), f : [A1 -> A2])
  !(a1 : A1, b1 : A1)
    [@(Equal0, A1, a1, b1) -> @(Equal0, A2, @(f, a1), @(f, b1))];

def Into1 = \(A1 : Type(1), A2 : Type(1), f : [A1 -> A2])
  !(a1 : A1, b1 : A1)
    [@(Equal1, A1, a1, b1) -> @(Equal1, A2, @(f, a1), @(f, b1))];
A function f : [A -> B] is called a surjection (or onto) when every element b ∈ B is in the image; that is, when the image is the whole codomain.

def Surjection = \(A1 : Type(0), A2 : Type(0), f : [A1 -> A2])
  let p = \(a1 : A1, a2 : A2) @(Equal0, A2, a2, @(f, a1)) in
    !(a2 : A2) ?(a1 : A1) @(p, a1, a2);

def SSurjection = \(A1 : Type(0), A2 : Type(0),
                    S : [A2 -> Type(0)], f : [A1 -> A2])
  let p = \(a1 : A1, a2 : A2) @(Equal0, A2, a2, @(f, a1)) in
    !(a2 : A2) [@(S, a2) -> ?(a1 : A1) @(p, a1, a2)];

def Surjection1 = \(A1 : Type(1), A2 : Type(1), f : [A1 -> A2])
  let p = \(a1 : A1, a2 : A2) @(Equal1, A2, a2, @(f, a1)) in
    !(a2 : A2) ?(a1 : A1) @(p, a1, a2);
A function f : [A -> B] is called one-one when @(f, a) = @(f, b) always implies a = b.
def OneToOne = \(A1 : Type(0), A2 : Type(0), f : [A1 -> A2])
  !(a1 : A1, b1 : A1)
    [@(Equal0, A2, @(f, a1), @(f, b1)) -> @(Equal0, A1, a1, b1)];

def OneToOne1 = \(A1 : Type(1), A2 : Type(1), f : [A1 -> A2])
  !(a1 : A1, b1 : A1)
    [@(Equal1, A2, @(f, a1), @(f, b1)) -> @(Equal1, A1, a1, b1)];
A function f : [A -> B] is an injection when it is both one-one and into.
def Injection = \(A1 : Type(0), A2 : Type(0), f : [A1 -> A2])
  @(And, @(Into, A1, A2, f), @(OneToOne, A1, A2, f));

def Injection1 = \(A1 : Type(1), A2 : Type(1), f : [A1 -> A2])
  @(And1, @(Into1, A1, A2, f), @(OneToOne1, A1, A2, f));
A function is called a bijection (or bijective, or one-to-one onto) when it is both injective and surjective.

def Bijection = \(A1 : Type(0), A2 : Type(0), f : [A1 -> A2])
  @(And, @(Surjection, A1, A2, f), @(Injection, A1, A2, f));

def SBijection = \(A1 : Type(0), A2 : Type(0),
                   S : [A2 -> Type(0)], f : [A1 -> A2])
  @(And, @(SSurjection, A1, A2, S, f), @(Injection, A1, A2, f));

def Bijection1 = \(A1 : Type(1), A2 : Type(1), f : [A1 -> A2])
  @(And1, @(Surjection1, A1, A2, f), @(Injection1, A1, A2, f));
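As a small sketch (IdFun and IdInto are names introduced here; the sketch assumes that the type checker identifies @(IdFun, A, a) with a by reduction), the identity function is into, since any proof of @(Equal0, A, a1, b1) already proves the required equality of the images:

def IdFun = \(A : Type(0)) \(a : A) a;

def IdInto = \(A : Type(0))
  \(a1 : A, b1 : A)
    \(e : @(Equal0, A, a1, b1)) e;

Under this assumption, IdInto should inhabit !(A : Type(0)) @(Into, A, A, @(IdFun, A)).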
2.7.2 Binary Operations
Operations on pairs of numbers arise in many contexts: the addition of two integers, the multiplication of two real numbers, the subtraction of one integer from another, and the like. In such cases we speak of a binary operation. In general, a binary operation "◦" on a set S of elements a, b, c, . . . is a rule which assigns to each ordered pair of elements a and b from S a uniquely defined third element c = a ◦ b in the same set S. Here by "uniquely" we mean the substitution property

a1 = a2 ∧ b1 = b2 → a1 ◦ b1 = a2 ◦ b2
2.8 Sets and Operations
A set P on A is nonvoid if and only if there exists an element a in P.

def Nonvoid = \(A : Type(0), P : [A -> Type(0)]) ?(a : A) @(P, a);
A set P is said to be included in a set Q if and only if, for every element a, if a is in P then a is in Q.

def Include = \(A : Type(0), P : [A -> Type(0)], Q : [A -> Type(0)])
  !(a : A) [@(P, a) -> @(Q, a)];
If P and Q are any subsets of A, the collection of elements c such that c is in P and c is in Q is called the intersection P ∧ Q of P and Q.

def Intersect = \(A : Type(0))
  \(P : [A -> Type(0)], Q : [A -> Type(0)])
    \(a : A) @(And, @(P, a), @(Q, a));

def Intersect0 = \(A : Type(0))
  \(P : [A -> Prop], Q : [A -> Prop])
    \(a : A) @(And, @(P, a), @(Q, a));
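As a small sketch (IntersectInP is a name introduced here; it relies on PROJ1 from Section 2.5.2.2 and on the same Prop/Type(0) mixing already present in the definition of Intersect), the intersection of P and Q is included in P:

def IntersectInP = \(A : Type(0), P : [A -> Type(0)], Q : [A -> Type(0)])
  \(a : A)
    \(h : @(And, @(P, a), @(Q, a)))
      @(PROJ1, @(P, a), @(Q, a), h);

Unfolding the definitions, IntersectInP should inhabit !(A : Type(0), P : [A -> Type(0)], Q : [A -> Type(0)]) @(Include, A, @(Intersect, A, P, Q), P).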
Similar remarks apply to logical sums of subsets of A.

def Union = \(A : Type(0))
  \(P : [A -> Type(0)], Q : [A -> Type(0)])
    \(a : A) @(Or, @(P, a), @(Q, a));

def Union0 = \(A : Type(0))
  \(P : [A -> Prop], Q : [A -> Prop])
    \(a : A) @(Or, @(P, a), @(Q, a));
Two subsets P and Q are said to be the same if and only if, for every a of type A, we have both that a being in P implies a being in Q and that a being in Q implies a being in P.

def Same = \(A : Type(0))
  \(P : [A -> Type(0)], Q : [A -> Type(0)])
    !(a : A) @(And, [@(P, a) -> @(Q, a)], [@(Q, a) -> @(P, a)]);
@(Same, A) is an equivalence relation on A. The proof is given as follows:

def SameRefl = \(A : Type(0))
  \(P : [A -> Type(0)])
    \(a : A)
      let Q = [@(P, a) -> @(P, a)] in
      let q = \(p : @(P, a)) p in
        @(ANDS, Q, Q, q, q);

def SameSymm = \(A : Type(0))
  \(P : [A -> Type(0)], Q : [A -> Type(0)])
    \(p : @(Same, A, P, Q))
      \(a : A)
        let U1 = [@(P, a) -> @(Q, a)],
            U2 = [@(Q, a) -> @(P, a)] in
        let u1 = @(PJ1, U1, U2, @(p, a)),
            u2 = @(PJ2, U1, U2, @(p, a)) in
          @(ANDS, U2, U1, u2, u1);

def SameTran = \(A : Type(0))
  \(P : [A -> Type(0)], Q : [A -> Type(0)], R : [A -> Type(0)])
    \(p : @(Same, A, P, Q), q : @(Same, A, Q, R))
      \(a : A)
        let U1 = [@(P, a) -> @(Q, a)], U2 = [@(Q, a) -> @(P, a)],
            V1 = [@(Q, a) -> @(R, a)], V2 = [@(R, a) -> @(Q, a)],
            W1 = [@(P, a) -> @(R, a)], W2 = [@(R, a) -> @(P, a)] in
        let u1 = @(PJ1, U1, U2, @(p, a)),
            u2 = @(PJ2, U1, U2, @(p, a)),
            v1 = @(PJ1, V1, V2, @(q, a)),
            v2 = @(PJ2, V1, V2, @(q, a)) in
        let w1 = \(r : @(P, a)) @(v1, @(u1, r)),
            w2 = \(r : @(R, a)) @(u2, @(v2, r)) in
          @(ANDS, W1, W2, w1, w2);
The collection of all subsets of the given set P will be denoted @(PowerSet, A, P).

def PowerSet = \(A : Type(0))
  \(P : [A -> Type(0)], Q : [A -> Type(0)])
    @(Include, A, Q, P);
Part II
Modeling Dataflow in PowerEpsilon
Chapter 3
Synchronous Dataflow Model

3.1 Reactive Systems
Reactive systems have been defined as computing systems which continuously interact with a given physical environment, when this environment is unable to synchronize logically with the system (for instance it cannot wait). Response times of the system must then meet requirements induced by the environment.
3.1.1 Transformational and Interactive Systems Versus Reactive Systems
This class of systems has been proposed so as to distinguish them from:

• Transformational systems, i.e., classical programs whose data are available at the beginning and which provide results upon terminating; and

• Interactive systems, which interact continuously with environments that possess synchronization capabilities (for instance, operating systems).

Reactive systems apply mainly to automatic process control and monitoring, and signal processing, but also to systems such as communication protocols and man-machine interfaces when the required response times are very small. In synchronous programming, we understand the term in a more restricted way, distinguishing between "interactive" and "reactive" systems:

• Interactive systems permanently communicate with their environment, but at their own speed. They are able to synchronize with their environment, i.e., to make it wait. The concurrent processes considered in operating systems or in database management are generally interactive.
Dataflow Model
46
• Reactive systems, in our meaning, have to react to an environment which cannot wait. Typical examples appear when the environment is a physical process.
3.1.2 Features of Reactive Systems
Generally, these systems share some important features:

• Parallelism. First, their design must take into account the parallel interaction between the system and its environment. Second, their implementation is quite often distributed, for reasons of performance, fault tolerance, and functionality (communication protocols, for instance). Moreover, it may be easier to conceive of a system as comprising parallel modules cooperating to achieve a given behavior, even if it is to be implemented in a centralized way.

• Time constraints. These include input frequencies and input/output response times. As said above, these constraints are induced by the environment and must imperatively be satisfied. Therefore, they should be specified, taken into account in the design, and verified as an important item of the system's correctness.

• Dependability. Most of these systems are highly critical, and this may be their most important feature. Just think of a design error in a nuclear plant control system or in a commercial aircraft flight control system! This domain of application requires very careful design and verification methods, and it may be one of the domains where formal methods should be applied with the highest priority; design methods and tools that support formal methods should be chosen even if they imply certain limitations.
3.2 Synchronous Model

3.2.1 Synchronous versus Asynchronous Systems
A system that has the property of always responding to a message within a known finite bound if it is working is said to be synchronous. A system not having this property is said to be asynchronous. It should be intuitively clear that asynchronous systems are going to be harder to deal with than synchronous ones. If a processor can send a message and know that the absence of a reply within T seconds means that the intended recipient has failed, it can take corrective action. If there is no upper limit to how long the response might take, even determining whether there has been a failure is going to be a problem.
3.2.2 Why Synchronous Modeling
Most programming tools used in designing reactive systems are not satisfactory.

• Traditional methods of using assembly languages for reactive system design can be considered dangerous, though they are widely used for reasons of code efficiency.

• Other methods include the use of classical languages for programming sequential tasks that coordinate and synchronize using services provided by a real-time operating system such as pSOS+, Chorus or VxWorks, and the use of parallel languages that provide their own real-time communication services, such as Ada. Even the latter, which seems more promising, has been criticized because the services provided are low level; it does not allow programs to be easily designed and validated, while appearing rather expensive at run time.

Synchronous languages have been proposed in order to deal with these problems. They provide "idealized" primitives allowing programmers to think of their programs as reacting instantaneously to external events. Thus each internal event of a program takes place at a known time with respect to the history of external events. This feature, together with the limitation to deterministic constructs, results in programs that are deterministic from both the functional and the temporal points of view.
3.2.3 Synchronous Hypothesis
In practice, the synchronous hypothesis assumes that the program is able to react to an external event before any further event occurs. If it is possible to check that this hypothesis holds for a given program and environment, then this ideal behavior represents a sensible abstraction.
3.2.4 Synchronous Programming

3.2.4.1 Event-Driven Scheme
All control engineers know a simple way to implement a reactive system by a single loop program, shown as follows:

for each input_event do
end

This program scheme is "event driven" since each reaction is triggered by an input event.
3.2.4.2 Sampling Scheme
The following shows an even simpler and more common scheme, which consists in periodically sampling the inputs:

for each period do
end

This "sampling" scheme is mainly used in numeric systems which solve, e.g., systems of differential equations.

3.2.4.3 State Transition Automata
These two schemes do not differ deeply, but they correspond to different intuitive points of view. In both cases, the program typically implements an automaton: the states are the valuations of the memory, and each reaction corresponds to a transition of the automaton. Such a transition may involve many computations which, from the automaton point of view, are considered atomic (i.e., input changes are only taken into account between two reactions). This is the essence of the synchronous paradigm, where such a reaction is often said to take no time. An atomic action is called an instant (logical time), and all the events occurring during such a reaction are considered simultaneous.

Now, automata are useful tools, for their simplicity, expressive power, and efficiency, but they are very difficult to design by hand. Synchronous languages aim at providing high level and modular constructs to make the design of such automata easier. The basic construct that all these languages provide is a notion of synchronous concurrency, inspired by Milner's synchronous product [25]: in the sampling scheme, when automata are composed in parallel, a transition of the product is made of "simultaneous" transitions of all of them; in the event-driven scheme, some automata can stay idle when not triggered by events coming either from the environment or from other automata. In any case, when participating in such a compound transition, each automaton considers the outputs of the others as part of its own inputs. This "instantaneous" communication is called the synchronous broadcast. The important point is that, in contrast with the asynchronous concurrency found in asynchronous languages like Ada, this synchronous product preserves determinism, a highly desirable feature in reactive systems design.

There are two fields where this synchronous model has been used for years:

• In synchronous circuit design, it is the usual model of communicating Mealy machines (FSMs). Most hardware description formalisms are naturally synchronous, or contain a significant synchronous subset. As a matter of fact, the compilation and verification of synchronous programs borrow many techniques from circuit CAD. However, while hardware description languages can be directly used to describe the data part of a circuit, they are of little help in designing complex hardware controllers. This explains the success of synchronous imperative languages, like ESTEREL, in this field.

• In control engineering, high level specification formalisms are often dataflow synchronous formalisms inherited from earlier analog technology: differential or finite-difference equations, block-diagrams, analog networks. Interpreted in a discrete world, these models can be formalized using the dataflow paradigm. However, these formalisms are seldom used as programming languages, and automatic code generation is not available. On the other hand, the more imperative languages used for programming automatic controllers (e.g., Sequential Function Charts) generally follow the same cyclic execution scheme.
3.2.5 Synchronous Languages
Statecharts is probably the first, and the most popular, formal language designed (in the early eighties) for the design of reactive systems. However, it was proposed more as a specification and design formalism than as a programming language. Many features of the synchronous model (synchronous product and broadcast) are already present in Statecharts, but determinism is not ensured, and many semantic problems were raised. At almost the same time, three programming languages were proposed by French academic groups:

• ESTEREL is an imperative language developed at the "Ecole des Mines" and INRIA, in Sophia Antipolis.

• Signal and LUSTRE are dataflow languages, designed at INRIA (Rennes) and CNRS (Grenoble), respectively. Signal is more "event-driven", while LUSTRE mainly corresponds to the "sampling" scheme.

Also, following the formal definition of the synchronous model, a purely synchronous variant of Statecharts was proposed: ARGOS. The ideas of ARGOS are currently used as the basis for a graphical version of ESTEREL, named SYNCCHARTS, at the University of Nice.
3.3 Dataflow Model

3.3.1 Dataflow Approach
One method for reliable programming is to use high level languages, i.e., languages that allow a natural expression of problems as programs. Within the domain of reactive programming, many people are familiar with automatic control and electronic circuits; traditionally, these people model their systems by means of networks of operators transforming flows of data (gates, switches, analog devices), at a higher level by means of Boolean functions and transfer functions with block-diagram structures, and finally by means of systems of dynamical equations which capture the behavior of these networks. Such formalisms look quite similar to what computer scientists call "dataflow" systems [32, 25, 19]. Therefore dataflow can be considered a high level paradigm in that field. Furthermore, as the basis of a high level programming language, it possesses several advantages:

• It is a functional model, with the consequent mathematical cleanness and, in particular, no complex side effects. This makes it well adapted to formal verification and safe program transformation, since functional relations over data flows may be seen as time invariant properties. Also, reuse is made easier, which is an interesting feature for reliable programming concerns.

• It is a parallel model, where any sequencing and synchronization constraints arise from data dependencies. This is a nice feature which allows the natural derivation of parallel implementations. It is also interesting to notice that, in the above application domain, people were accustomed to parallelism at a much earlier time than in other areas of computer science.
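The dataflow view sketched above can be given a first, naive model in PowerEpsilon itself (a sketch; Nat, Flow and Lift2 are names introduced here for illustration and are not part of PowerEpsilon's prelude): a flow of values of type A is a function from instants to A, and a network operator is lifted pointwise to flows:

dec Nat : Type(0);

def Flow = \(A : Type(0)) [Nat -> A];

def Lift2 = \(A : Type(0), B : Type(0), C : Type(0),
              op : [A -> B -> C],
              x : @(Flow, A), y : @(Flow, B))
  \(n : Nat) @(op, @(x, n), @(y, n));

At each instant n, the output flow @(Lift2, A, B, C, op, x, y) carries the value @(op, @(x, n), @(y, n)); data dependencies, not explicit sequencing, determine the order of evaluation.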
3.3.2 Control Flow versus Data Flow

The differences between imperative and dataflow programming languages are more profound than just the differences between two kinds of languages. The designs of the two kinds of languages reflect the structure, or "architecture", of the machinery used to implement them. The different approaches of imperative and dataflow languages reflect two fundamentally different modes of computation, i.e., two very different ways of computing the desired results.

3.3.2.1 Control Flow Languages
Programs in an imperative language like Pascal are intended to be run on what is known as a “von Neumann” machine (named after the mathematician John von Neumann). A von Neumann machine consists of a small processor attached to a big memory, this memory being a vast collection of storage locations. Data items are fetched one by one from the memory, sent to the central processing unit (CPU) in which the actual computation is performed, and then returned one by one to their “cells”. The basic philosophy of the von Neumann machine is that data is “normally” at rest. The data items are safely locked up in individual cells and are processed one at a time. The CPU is controlled by a program which is itself usually stored in the memory.

3.3.2.2 Dataflow Languages
The “dataflow” model is based on the principle of processing the data while it is in motion, ‘flowing’ through a dataflow network. A dataflow network is a system of nodes and processing stations connected by a number of communication channels or arcs. The dataflow approach to program construction has proved to be very effective. The great advantage of the dataflow approach is that ‘modules’ interact in a very simple way, through the pipelines connecting them. The modules are well insulated from each other in the sense that the internal operation of one cannot affect that of another. This is not the case in ordinary programming languages like C and Pascal; the body of a procedure can contain assignments to parameters and even to global variables (variables external to the procedure and not in the parameter list). As a result, the modules (procedures) in an apparently simple C program can pass information to each other in a very complicated manner. The dataflow approach to programming has already been tested in practice. The “shell” language of the popular UNIX operating system is based on the concepts of “pipeline” and “filter”. A UNIX pipeline is essentially a linear dataflow network (i.e., without loops and without branching). The form of dataflow provided in UNIX is therefore limited, but it is nevertheless still very powerful. The interface between filters is very ‘narrow’ (a stream of characters), and so it is easy to combine them without the danger of interference and side effects. Very often a problem can be solved entirely within the shell language itself, without the need to “descend” into a ‘high-level’ language like C. The family of dataflow languages can be classified into three categories:
• The languages (in particular the “single-assignment” languages) which are heavily influenced by the von Neumann view of computation and are therefore imperative dataflow languages.
• There are also a number of pure nonprocedural languages (basically variations on LISP), but they are oriented towards the recursive-function style of programming rather than towards dataflow.
• The third category consists of languages purely designed for dataflow programming. Typical languages are Lucid and SCADE/LUSTRE; the former is an academic language designed by William W. Wadge and Edward A. Ashcroft, and the latter is a commercial language designed by Verilog in France.
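The pipeline-and-filter picture above can be illustrated outside PowerEpsilon. The following Python sketch (the names producer, grep and count are illustrative only, not part of the text) models a linear dataflow network in the style of a UNIX pipeline, with generators standing in for filters that interact only through the pipes connecting them.

```python
# A linear dataflow network in the style of a UNIX pipeline, sketched with
# Python generators. Each "filter" consumes an upstream iterator and yields
# a downstream one; filters are insulated from each other's internals.

def producer(lines):
    """Source node: emits items one by one."""
    for line in lines:
        yield line

def grep(pattern, stream):
    """Filter node: passes through only items containing `pattern`."""
    for line in stream:
        if pattern in line:
            yield line

def count(stream):
    """Sink node: consumes the stream and returns the number of items."""
    return sum(1 for _ in stream)

# Plumbing the filters together, like `cat file | grep error | wc -l`:
lines = ["ok: boot", "error: disk", "ok: net", "error: fan"]
n_errors = count(grep("error", producer(lines)))
```

Because each filter sees only a stream of items, the stages can be recombined freely without interference or side effects, just as with shell filters.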
3.4 Synchronous Dataflow Model
It may thus seem appealing to develop a dataflow approach to reactive programming. However, up until now dataflow has been thought of as essentially asynchronous, whereas a synchronous approach seems necessary to tackle the problem of time, for instance by relating time with the index of data in flows. There are basically two motivations for designing a synchronous dataflow language:
• It should restrict dataflow systems to only those that can be implemented as bounded-memory, automata-like programs.
• It should take advantage of the approach in developing techniques of formal verification. The idea is to consider a synchronous dataflow language as a specification language.
3.5 Continuously Operating Programs
Probably the most unconventional idea in the dataflow model is that computations do not terminate. A conventional program which runs forever given a particular input is usually considered to be in a failure mode (e.g., in an infinite loop) caused by errors in the program or in its data. In actual fact, however, the view that infinite or nonterminating computations are inherently undesirable is not very realistic. In most engineering applications a device (a bridge, a router, a telephone circuit, a pump or a car) is considered to have failed when it ceases to perform the activity for which it was designed. The better the design and manufacture of a product, the longer its lifetime. The ‘ideal’ product would therefore last forever, i.e., it would be a continuously operating device. There are a few exceptions (e.g., explosive weapons), but in general engineers are concerned with building objects which operate indefinitely. It is certainly true that some programs are meant to be used only once (say to compute the first million primes), but genuine ‘one-off’ efforts are very rare. Most supposedly finite programs (such as compilers) are intended to be run repeatedly (and indefinitely) on different sets of data. Sometimes a program is devoted to maintaining a single data structure, such as a master file, which slowly evolves over a period of time. Finally, there are more and more examples of “real-time” systems (operating systems, process control systems, air traffic control systems) which must be continuously operating in a very literal sense. Some software engineers now advocate that even supposedly conventional programs are best thought of as running uninterruptedly.
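As a small illustration of a useful nonterminating computation, the following Python sketch (the names naturals and running_sum are ours, not from the text) builds an infinite source and a continuously operating filter; a client samples only a finite prefix of the unbounded output.

```python
# A "continuously operating" program: a nonterminating computation that is
# nevertheless useful, because its output can be consumed incrementally.
import itertools

def naturals():
    """An infinite source; it never terminates by design."""
    n = 0
    while True:
        yield n
        n += 1

def running_sum(stream):
    """A continuously operating filter: emits the sum seen so far, forever."""
    total = 0
    for x in stream:
        total += x
        yield total

# A client samples a finite prefix of the infinite computation.
first_five = list(itertools.islice(running_sum(naturals()), 5))
# first_five == [0, 1, 3, 6, 10]
```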
3.6 The Problems with the Imperative Approach
In recent decades the imperative approach to programming languages has shown increasing signs of strain. The conventional ‘mainstream’ programming languages are now being used for tasks which are far more complicated than ever before, and by far more people than ever before. The first von Neumann machines (those produced in the early 1950s) could perform only a few hundred operations (such as additions) per second. A program was usually the work of a single programmer and was considered big if it was more than a few dozen lines long. Modern machines can now perform millions of operations per second. Some current software systems involve millions of lines of code, the product of years of labor on the part of hundreds of programmers and systems analysts. Yet the basic principles of the von Neumann languages and machines have remained unchanged since 1952 – or at least since the introduction of ALGOL in 1960.
3.6.1 Complexity of Imperative Languages
The most obvious sign that something is wrong with the von Neumann approach is the explosive and exponential growth in the complexity of these machines and languages – especially the languages.
3.6.2 Lack of Precise Specifications
A second, related problem with the von Neumann languages is their lack of precise specifications. The manuals usually give the syntax (what programs are allowed) unambiguously and in great detail, but give only an approximate and informal treatment of semantics (what the programs do). The supposedly exhaustive, telephone-book-sized manuals describe what happens only in the most common cases, and then only in very informal language. Any really complicated question (especially one involving the interaction of different features) must be referred to the local compiler – or to a local ‘guru’ who knows the compiler well. Programmers often write short experimental programs to discover how the language works.
3.6.3 Unreliability
The third symptom of the von Neumann disease is an inevitable consequence of the first two: unreliability. The programs do not work! It is almost impossible to write an imperative program longer than a dozen lines or so without introducing some kind of error. These “bugs” prove to be extremely hard to locate, and programmers can easily end up spending far more time in “debugging” a program than they spent in designing and coding it. Error-free software is, as a result, enormously expensive to produce. In a large system it is practically impossible to remove all the bugs. Some of the subtler errors can remain dormant for years (say, in a seldom used part of the program) and then suddenly ‘come to life’ with catastrophic results. Of course human beings will always make mistakes, no matter what language is used. But the features that have been found by experience to be most error prone (goto statements, side effects, aliasing) are suggested naturally by the imperative approach and are difficult to do without.
3.6.4 Inefficiency
Finally, we should mention a fourth problem with von Neumann systems, one which has only recently become apparent but which may prove to be the most serious of all: they are inefficient.
3.7 PowerEpsilon as a Dataflow Language

3.7.1 Infinitary Programming

Based on a higher-order λ-calculus, PowerEpsilon supports infinitary programming: the objects represented in PowerEpsilon may be infinite.

3.7.1.1 Defining Dataflow as a Strong Sum
In fact, we can define infinite dataflows within the framework of PowerEpsilon.

dec Stream : !(T : Prop) Type(0);
def Stream = \(T : Prop) ?(t : T) @(Stream, T);

So a dataflow object of type T may be represented as a pair <a, b>, where a : T and b : @(Stream, T).
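For illustration, the strong-sum reading of streams can be mimicked in Python: a stream is a pair of a head and a tail, with the tail wrapped in a thunk so the infinite structure is only unfolded on demand. The names below (scons, fst, snd, from_, take) are our own sketch, not PowerEpsilon code.

```python
# A Python analogue of the strong-sum stream: a pair <head, tail>, where the
# tail is a thunk (a zero-argument function) so the infinite structure is
# only forced on demand, as under lazy evaluation.

def scons(head, tail_thunk):
    return (head, tail_thunk)       # the pair <t, s>

def fst(stream):                    # analogue of @(FST, s)
    return stream[0]

def snd(stream):                    # analogue of @(SND, s): force the tail
    return stream[1]()

def from_(n):
    """The infinite stream n, n+1, n+2, ..."""
    return scons(n, lambda: from_(n + 1))

def take(k, stream):
    """Observe the first k elements of a (possibly infinite) stream."""
    out = []
    for _ in range(k):
        out.append(fst(stream))
        stream = snd(stream)
    return out
```

Only the prefix actually demanded by take is ever computed; the rest of the stream stays unevaluated inside its thunk.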
3.7.2 Lazy Evaluation
The operational semantics of PowerEpsilon is based on ‘lazy’ evaluation. Lazy evaluation techniques are more faithful to the mathematical semantics. Lazy evaluation also makes the evaluation of infinite streams possible.
3.7.3 Formal Reasoning of Real-Time Programs
Since PowerEpsilon is both a programming language and a mathematical theorem-proving language, it is possible to use PowerEpsilon to verify the most important properties of safety-critical real-time systems, in order to ensure that the systems designed satisfy their real-time requirements.
Chapter 4
Modeling of Dataflow Programming in PowerEpsilon

4.1 Data Structure Stream
A stream of type T is represented as a strong sum in terms of PowerEpsilon as follows:

dec Stream : !(T : Prop) Type(0);
def Stream = \(T : Prop) ?(t : T) @(Stream, T);

The dataflow model can be compared with the message-passing mechanism in imperative concurrent programming models, where streams play the role of message queues.
4.2 Operators on Streams

4.2.1 Constructors of Streams
We first need to define an empty element SNIL, which marks the end of a stream.
dec SNIL : !(T : Prop) @(Stream, T);

We then need a constructor which takes an element t of type T and a stream s of type @(Stream, T) and yields a new stream.

def SCONS = \(T : Prop) \(t : T, s : @(Stream, T)) <t, s>;

Furthermore, we need a bottom element SERR as an exceptional object.

dec SERR : !(T : Prop) T;
4.2.2 Testing Functions
Two functions are needed to test whether a given stream is empty or two given streams are equal.

dec IS_SNIL : !(T : Prop) [@(Stream, T) -> Bool];
dec SEQ : !(T : Prop) [@(Stream, T) -> @(Stream, T) -> Bool];
4.2.3 Selection Functions
Two selection functions are defined. The function SHEAD takes a stream as input and returns its first element. The function STAIL takes a stream as input and returns the rest of the stream after the first element; in other words, STAIL throws away the first value of a given stream.

def SHEAD =
\(T : Prop, s : @(Stream, T))
@(GIF_THEN_ELSE, T,
  @(IS_SNIL, T, s),
  @(SERR, T),
  @(FST, s));

def STAIL =
\(T : Prop, s : @(Stream, T))
@(GIF_THEN_ELSE0, @(Stream, T),
  @(IS_SNIL, T, s),
  @(SNIL, T),
  @(SND, s));
4.3 Other Operators on Streams

4.3.1 Interleaving Operator
Two streams can be merged together by an interleaving operation.

dec SINTERLEAVE : !(T : Prop) [@(Stream, T) -> @(Stream, T) -> @(Stream, T)];

def SINTERLEAVE =
\(T : Prop)
\(s1 : @(Stream, T), s2 : @(Stream, T))
@(GIF_THEN_ELSE0, @(Stream, T),
  @(IS_SNIL, T, s1),
  s2,
  @(GIF_THEN_ELSE0, @(Stream, T),
    @(IS_SNIL, T, s2),
    s1,
    let h1 = @(FST, s1), h2 = @(FST, s2),
        t1 = @(SND, s1), t2 = @(SND, s2) in
    let ns = @(SINTERLEAVE, T, t1, t2) in
    <h1, <h2, ns>>));

This resembles interleaving the execution of two running processes.
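The interleaving operator can be illustrated on finite Python lists standing in for streams. The function name sinterleave mirrors the operator above, but the code is only a sketch: one element is taken from each stream in turn, and when one stream is exhausted the remainder of the other is appended.

```python
# Interleaving two streams, sketched on finite lists: take one element from
# each stream in turn; when one side runs out, append the rest of the other.

def sinterleave(s1, s2):
    if not s1:
        return list(s2)
    if not s2:
        return list(s1)
    h1, t1 = s1[0], s1[1:]
    h2, t2 = s2[0], s2[1:]
    return [h1, h2] + sinterleave(t1, t2)
```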
4.3.2 Hiding Operator
An element in a stream may be hidden according to a certain condition defined by the function E.

dec SHIDE : !(T : Prop) [T -> @(Stream, T) -> [T -> T -> Bool] -> @(Stream, T)];

def SHIDE =
\(T : Prop)
\(e : T, s : @(Stream, T), E : [T -> T -> Bool])
@(GIF_THEN_ELSE0, @(Stream, T),
  @(IS_SNIL, T, s),
  s,
  let h = @(FST, s), t = @(SND, s) in
  let ns = @(SHIDE, T, e, t, E) in
  @(GIF_THEN_ELSE0, @(Stream, T),
    @(E, e, h),
    ns,
    <h, ns>));
4.3.3 Parallel Operator

4.3.3.1 Strong Parallel Operator
There are several definitions of parallel operators on streams. The first one is represented as a strong sum.

def SPARL2 = \(T : Prop) \(s1 : @(Stream, T), s2 : @(Stream, T)) <s1, s2>;
4.3.3.2 Weak Parallel Operator
We may also represent a parallel operator as a logical product ANDS. def WSPAR2 = \(T : Prop) \(s1 : @(Stream, T), s2 : @(Stream, T)) @(ANDS, @(Stream, T), @(Stream, T), s1, s2);
4.3.3.3 Strong Parallel Operator for Different Streams
The parallel operator can also be defined on two streams with different types.

def SPARLDIF2 = \(T1 : Prop, T2 : Prop) \(s1 : @(Stream, T1), s2 : @(Stream, T2)) <s1, s2>;
4.3.3.4 Weak Parallel Operator for Different Streams
Instead of using a strong sum to represent the strong parallel operator, we may use the weak parallel operator for two streams with different types.

def WSPARDIF2 =
\(T1 : Prop, T2 : Prop)
\(s1 : @(Stream, T1), s2 : @(Stream, T2))
@(ANDS, @(Stream, T1), @(Stream, T2), s1, s2);
4.3.3.5 Or Parallel Operator
The or-parallel operator based on type Or is also derived. def WSSUM21 = \(T1 : Prop, T2 : Prop) \(s1 : @(Stream, T1)) @(INJ1, @(Stream, T1), @(Stream, T2), s1); def WSSUM22 = \(T1 : Prop, T2 : Prop) \(s2 : @(Stream, T2)) @(INJ2, @(Stream, T1), @(Stream, T2), s2);
4.3.4 Split Operator

4.3.4.1 Strong Split Operator
Splitting a stream into two according to the condition E; the result is represented as a strong sum.

dec SSPLIT : !(T : Prop) [@(Stream, T) -> [T -> Bool] -> ?(x : @(Stream, T)) @(Stream, T)];

def SSPLIT =
\(T : Prop)
\(s : @(Stream, T), E : [T -> Bool])
@(GIF_THEN_ELSE1, ?(x : @(Stream, T)) @(Stream, T),
  @(IS_SNIL, T, s),
  <s, s>,
  let h = @(FST, s), t = @(SND, s) in
  let ns = @(SSPLIT, T, t, E) in
  let s1 = @(FST, ns), s2 = @(SND, ns) in
  @(GIF_THEN_ELSE1, ?(x : @(Stream, T)) @(Stream, T),
    @(E, h),
    <<h, s1>, s2>,
    <s1, <h, s2>>));
4.3.4.2 Weak Split Operator
Splitting a stream into two according to the condition E; the result is represented as a product type.

dec WSPLIT : !(T : Prop) [@(Stream, T) -> [T -> Bool] -> @(And, @(Stream, T), @(Stream, T))];

def WSPLIT =
\(T : Prop)
\(s : @(Stream, T), E : [T -> Bool])
@(GIF_THEN_ELSE, @(And, @(Stream, T), @(Stream, T)),
  @(IS_SNIL, T, s),
  @(ANDS, @(Stream, T), @(Stream, T), s, s),
  let h = @(FST, s), t = @(SND, s) in
  let ns = @(WSPLIT, T, t, E) in
  let s1 = @(PJ1, @(Stream, T), @(Stream, T), ns),
      s2 = @(PJ2, @(Stream, T), @(Stream, T), ns) in
  @(GIF_THEN_ELSE, @(And, @(Stream, T), @(Stream, T)),
    @(E, h),
    @(ANDS, @(Stream, T), @(Stream, T), <h, s1>, s2),
    @(ANDS, @(Stream, T), @(Stream, T), s1, <h, s2>)));
4.3.4.3 Fairness Split Operator
Sometimes it is required that a stream be split fairly. This can be done by the fairness split operator FSPLIT.

dec FSPLIT : !(T : Prop) [@(Stream, T) -> @(And, @(Stream, T), @(Stream, T))];

def FSPLIT =
\(T : Prop)
\(s : @(Stream, T))
@(GIF_THEN_ELSE, @(And, @(Stream, T), @(Stream, T)),
  @(IS_SNIL, T, s),
  @(ANDS, @(Stream, T), @(Stream, T), s, s),
  let h = @(FST, s), t = @(SND, s) in
  @(GIF_THEN_ELSE, @(And, @(Stream, T), @(Stream, T)),
    @(IS_SNIL, T, t),
    @(ANDS, @(Stream, T), @(Stream, T), <h, t>, t),
    let k = @(FST, t), u = @(SND, t) in
    let ns = @(FSPLIT, T, u) in
    let s1 = @(PJ1, @(Stream, T), @(Stream, T), ns),
        s2 = @(PJ2, @(Stream, T), @(Stream, T), ns) in
    @(ANDS, @(Stream, T), @(Stream, T), <h, s1>, <k, s2>)));
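The fair split can be illustrated in Python on finite lists: elements are dealt alternately to the two output streams, so neither side can be starved. The function name fsplit below mirrors the operator above but is only a sketch.

```python
# Fairness split sketched on finite lists: elements are dealt alternately
# to the two output streams, two at a time (one to each side per step).

def fsplit(s):
    if not s:
        return ([], [])
    if len(s) == 1:            # odd leftover goes to the first stream
        return ([s[0]], [])
    h, k, rest = s[0], s[1], s[2:]
    s1, s2 = fsplit(rest)
    return ([h] + s1, [k] + s2)
```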
4.3.5 Followed by Operator
The followed-by operator FBY is another constructor of streams.

dec FBY : !(T : Prop) [T -> @(Stream, T) -> @(Stream, T)];
def FBY = \(T : Prop, t : T, s : @(Stream, T)) <t, @(SND, s)>;

The following followed-by operator FBY2 is actually a renaming of the stream constructor SCONS.

dec FBY2 : !(T : Prop) [T -> @(Stream, T) -> @(Stream, T)];
def FBY2 = \(T : Prop, t : T, s : @(Stream, T)) <t, s>;
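On finite lists the two variants can be illustrated in Python. Here fby2 is plain cons, while fby is read analogously to the timed TFBY described later in the text (the stream (x, s2, ..., sn, ...)): it replaces the first value of the stream. This is a sketch under that reading, not PowerEpsilon code.

```python
# Two followed-by variants sketched on lists:
#   fby2(x, s) prepends x (it is just the stream constructor, cons);
#   fby(x, s) replaces the first value of s, as with the timed TFBY:
#   fby(x, (s1, s2, ...)) = (x, s2, ...).

def fby2(x, s):
    return [x] + list(s)        # x, s1, s2, ...

def fby(x, s):
    return [x] + list(s[1:])    # x, s2, s3, ...
```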
4.3.6 WHENEVER Operator
It corresponds to a filter which inputs values for t and s at the same rate and outputs the value of t if the value of s is true. Actually, if the value input for s is false, the value input for t is not even looked at; the filter is not “strict”. The operational reality of non-strict filters (i.e., filters which can ‘skip’ parts of their input) is questionable, but they are useful conceptually.
dec WHENEVER : !(T : Prop) [@(Stream, T) -> @(Stream, Bool) -> @(Stream, T)];

def WHENEVER =
\(T : Prop, t : @(Stream, T), s : @(Stream, Bool))
@(GIF_THEN_ELSE0, @(Stream, T),
  @(FST, s),
  @(FBY2, T, @(FST, t), @(WHENEVER, T, @(SND, t), @(SND, s))),
  @(WHENEVER, T, @(SND, t), @(SND, s)));
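The WHENEVER filter can be illustrated in Python with generators: values of t and the Boolean stream s are consumed at the same rate, and a value of t is emitted only when the matching value of s is true. The name whenever mirrors the operator above; the code is a sketch only.

```python
# WHENEVER sketched with generators: the Boolean stream acts as a mask
# consumed at the same rate as the value stream.

def whenever(t, s):
    for value, keep in zip(t, s):
        if keep:
            yield value
```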
4.3.7 UPON Operator
This corresponds to a (non-strict) filter which inputs values for t and s and outputs t and repeats it, while inputting values for s (not t), until the value input for s is true, at which point it inputs a value for t and outputs that repeatedly, while inputting values for s, until s is true again, and so on.

dec UPON : !(T : Prop) [@(Stream, T) -> @(Stream, Bool) -> @(Stream, T)];

def UPON =
\(T : Prop, t : @(Stream, T), s : @(Stream, Bool))
let u = @(GIF_THEN_ELSE0, @(Stream, T),
          @(FST, s),
          @(UPON, T, @(SND, t), @(SND, s)),
          @(UPON, T, t, @(SND, s))) in
@(FBY2, T, @(FST, t), u);
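UPON's sample-and-hold behavior can be illustrated on finite Python lists: the current value of t is emitted at every step, and the input t is only advanced when the value read from s is true. The name upon is ours; the code is a sketch only.

```python
# UPON sketched on finite lists: emit the current value of t at each step;
# advance t only after reading a true value from s.

def upon(t, s):
    out, i = [], 0
    for keep in s:
        out.append(t[i])
        if keep and i + 1 < len(t):
            i += 1
    return out
```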
4.3.8 ASA Operator
Sometimes we are interested only in the first of these values, i.e., the value which t has when s is first true. def ASA = \(T : Prop, t : @(Stream, T), s : @(Stream, Bool)) @(FST, @(WHENEVER, T, t, s));
4.3.9 Arrow Operator
@(FBYS, T, f, g) is defined as an arrow operator for streams f and g; the result stream is obtained by merging the first element of f with the rest of the elements of g into a new stream.

dec FBYS : !(T : Prop) [@(Stream, T) -> @(Stream, T) -> @(Stream, T)];
def FBYS = \(T : Prop, f : @(Stream, T), g : @(Stream, T)) <@(FST, f), @(SND, g)>;

@(FBYS2, T, f, g) is defined as an arrow operator for streams f and g; the result stream is obtained by merging the first element of f with g itself into a new stream.

dec FBYS2 : !(T : Prop) [@(Stream, T) -> @(Stream, T) -> @(Stream, T)];
def FBYS2 = \(T : Prop, f : @(Stream, T), g : @(Stream, T)) <@(FST, f), g>;
4.3.10 Pipe Operator
The pipe operator is very similar to the function composition operator in an ordinary functional programming language. def SPIPE = \(T1 : Prop, T2 : Prop, T3 : Prop) \(f : [@(Stream, T1) -> @(Stream, T2)], g : [@(Stream, T2) -> @(Stream, T3)]) \(s : @(Stream, T1)) @(g, @(f, s));
4.3.11 Conditional Operator
As in most conventional programming languages, a conditional operator is defined for dataflow programming as follows:

dec SIFTHENELSE : !(T : Prop) [@(Stream, Bool) -> @(Stream, T) -> @(Stream, T) -> @(Stream, T)];

def SIFTHENELSE =
\(T : Prop)
\(b : @(Stream, Bool), s : @(Stream, T), t : @(Stream, T))
let hb = @(FST, b), tb = @(SND, b),
    hs = @(FST, s), ts = @(SND, s),
    ht = @(FST, t), tt = @(SND, t) in
@(GIF_THEN_ELSE0, @(Stream, T),
  hb,
  <hs, @(SIFTHENELSE, T, tb, ts, tt)>,
  <ht, @(SIFTHENELSE, T, tb, ts, tt)>);
Chapter 5
Modeling of Synchronous Dataflow Programming in PowerEpsilon

5.1 Flows and Clocks
The synchronous dataflow approach introduces a temporal dimension into the dataflow model. A natural way of doing this is to relate the time and the data in the flows. Indeed, the handled entities can be naturally interpreted as timed functions. A basic entity (or flow) is a pair formed of
• A sequence of values of a given type;
• A clock representing a sequence of instants (on a discrete time scale).
A flow has the ith value of its sequence at the ith instant of its clock. The temporal dimension therefore underlies all descriptions in this type of model, and we now have a new stream definition as follows:

def TStream =
\(T : Prop)
let W = @(And, T, Bool) in
?(w : W) @(Stream, W);
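The pairing of values with clock stamps can be illustrated in Python as lists of (value, tick) pairs; the names tstream and values_at_ticks are illustrative only, not part of the model.

```python
# A timed stream: each element pairs a value with a Boolean clock stamp
# marking whether the value is present at that instant of the basic clock.

def tstream(values, clock):
    """Build a timed stream from a value sequence and its clock."""
    return list(zip(values, clock))

def values_at_ticks(ts):
    """The sequence carried by the flow: the ith value at the ith tick."""
    return [v for (v, tick) in ts if tick]
```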
5.2 Operators on TStream
dec TSNIL : !(T : Prop) @(TStream, T);
dec TSERR : !(T : Prop) @(And, T, Bool);
def TSERR = \(T : Prop) @(ANDS, T, Bool, @(SERR, T), FF);

dec TSNULL : !(T : Prop) @(And, T, Bool);
def TSNULL = \(T : Prop) @(ANDS, T, Bool, @(SNULL, T), FF);

dec IS_TSNIL : !(T : Prop) [@(TStream, T) -> Bool];
dec TSEQ : !(T : Prop) [@(TStream, T) -> @(TStream, T) -> Bool];
5.3 Data Flow Operators

5.3.1 Interleaving Operator
The first interleaving operator TSINTERLEAVE merges two streams into one, just as in the pure dataflow model; the timing here is re-scheduled.

dec TSINTERLEAVE : !(T : Prop) [@(TStream, T) -> @(TStream, T) -> @(TStream, T)];

def TSINTERLEAVE =
\(T : Prop)
\(s1 : @(TStream, T), s2 : @(TStream, T))
@(GIF_THEN_ELSE0, @(TStream, T),
  @(TSEQ, T, s1, @(TSNIL, T)),
  s2,
  @(GIF_THEN_ELSE0, @(TStream, T),
    @(TSEQ, T, s2, @(TSNIL, T)),
    s1,
    let h1 = @(FST, s1), h2 = @(FST, s2),
        t1 = @(SND, s1), t2 = @(SND, s2) in
    let ns = @(TSINTERLEAVE, T, t1, t2) in
    <h1, <h2, ns>>));
However, this definition violates the principles inherent in the synchronous approach and is therefore not suitable for any synchronous model. For the second interleaving operator TSINTERLEAVE2, the timing is not slowed down; instead, a different timing stamp is given. The new timing stamp is set if the timing stamps of both input streams are true.

dec TSINTERLEAVE2 : !(T : Prop) [@(TStream, T) -> @(TStream, T) -> @(TStream, @(And, T, T))];
dec TSDOUBLE1 : !(T : Prop) [@(TStream, T) -> @(TStream, @(And, T, T))];

def TSDOUBLE1 =
\(T : Prop, s : @(TStream, T))
let h = @(FST, s), t = @(SND, s) in
let hh = @(PJ1, T, Bool, h), ht = @(PJ2, T, Bool, h) in
let nt = @(ANDS, @(And, T, T), Bool, @(ANDS, T, T, hh, @(SNULL, T)), ht) in
let ns = @(TSDOUBLE1, T, t) in
<nt, ns>;

dec TSDOUBLE2 : !(T : Prop) [@(TStream, T) -> @(TStream, @(And, T, T))];

def TSDOUBLE2 =
\(T : Prop, s : @(TStream, T))
let h = @(FST, s), t = @(SND, s) in
let hh = @(PJ1, T, Bool, h), ht = @(PJ2, T, Bool, h) in
let nt = @(ANDS, @(And, T, T), Bool, @(ANDS, T, T, @(SNULL, T), hh), ht) in
let ns = @(TSDOUBLE2, T, t) in
<nt, ns>;

def TSINTERLEAVE2 =
\(T : Prop)
\(s1 : @(TStream, T), s2 : @(TStream, T))
@(GIF_THEN_ELSE0, @(TStream, @(And, T, T)),
  @(TSEQ, T, s1, @(TSNIL, T)),
  @(TSDOUBLE2, T, s2),
  @(GIF_THEN_ELSE0, @(TStream, @(And, T, T)),
    @(TSEQ, T, s2, @(TSNIL, T)),
    @(TSDOUBLE1, T, s1),
    let h1 = @(FST, s1), h2 = @(FST, s2),
        t1 = @(SND, s1), t2 = @(SND, s2) in
    let hh1 = @(PJ1, T, Bool, h1), ht1 = @(PJ2, T, Bool, h1),
        hh2 = @(PJ1, T, Bool, h2), ht2 = @(PJ2, T, Bool, h2) in
    let nt = @(ANDS, @(And, T, T), Bool, @(ANDS, T, T, hh1, hh2), @(AND, ht1, ht2)) in
    let ns = @(TSINTERLEAVE2, T, t1, t2) in
    <nt, ns>));
For the third interleaving operator TSINTERLEAVE3, the timing is not slowed down; instead, a different timing stamp is given. The new timing stamp is set if either of the timing stamps of the input streams is true.

dec TSINTERLEAVE3 : !(T : Prop) [@(TStream, T) -> @(TStream, T) -> @(TStream, @(And, T, T))];

def TSINTERLEAVE3 =
\(T : Prop)
\(s1 : @(TStream, T), s2 : @(TStream, T))
@(GIF_THEN_ELSE0, @(TStream, @(And, T, T)),
  @(TSEQ, T, s1, @(TSNIL, T)),
  @(TSDOUBLE2, T, s2),
  @(GIF_THEN_ELSE0, @(TStream, @(And, T, T)),
    @(TSEQ, T, s2, @(TSNIL, T)),
    @(TSDOUBLE1, T, s1),
    let h1 = @(FST, s1), h2 = @(FST, s2),
        t1 = @(SND, s1), t2 = @(SND, s2) in
    let hh1 = @(PJ1, T, Bool, h1), ht1 = @(PJ2, T, Bool, h1),
        hh2 = @(PJ1, T, Bool, h2), ht2 = @(PJ2, T, Bool, h2) in
    let nt = @(ANDS, @(And, T, T), Bool, @(ANDS, T, T, hh1, hh2), @(OR, ht1, ht2)) in
    let ns = @(TSINTERLEAVE3, T, t1, t2) in
    <nt, ns>));
5.3.2 Hiding Operator
An element in a stream may be hidden according to a certain condition defined by the function E.

dec TSHIDE : !(T : Prop) [T -> @(TStream, T) -> [T -> T -> Bool] -> @(TStream, T)];
def TSHIDE =
\(T : Prop)
\(e : T, s : @(TStream, T), E : [T -> T -> Bool])
@(GIF_THEN_ELSE0, @(TStream, T),
  @(TSEQ, T, s, @(TSNIL, T)),
  s,
  let t = @(FST, s), r = @(SND, s) in
  let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
  let ns = @(TSHIDE, T, e, r, E) in
  @(GIF_THEN_ELSE0, @(TStream, T),
    @(E, e, ht),
    <@(ANDS, T, Bool, ht, FF), ns>,
    <t, ns>));
5.3.3 Parallel Operator

5.3.3.1 Strong Parallel Operator
The strong parallel operator is defined by the strong sum construct in PowerEpsilon. In this case, the second stream may depend on the first one.

def TSPARL2 = \(T : Prop) \(s1 : @(TStream, T), s2 : @(TStream, T)) <s1, s2>;
5.3.3.2 Weak Parallel Operator
The weak parallel operator is defined by the logical ∧ construct in PowerEpsilon. In this case, the two streams must be independent of each other.

def TWSPAR2 =
\(T : Prop)
\(s1 : @(TStream, T), s2 : @(TStream, T))
@(ANDS, @(TStream, T), @(TStream, T), s1, s2);
5.3.3.3 Strong Parallel Operator for Different Streams
We may also define the strong parallel operator on the streams with different types.
def TSPARLDIF2 = \(T1 : Prop, T2 : Prop) \(s1 : @(TStream, T1), s2 : @(TStream, T2)) <s1, s2>;
5.3.3.4 Weak Parallel Operator for Different Streams
The weak parallel operator can also be defined for the streams with different data types. def TWSPARDIF2 = \(T1 : Prop, T2 : Prop) \(s1 : @(TStream, T1), s2 : @(TStream, T2)) @(ANDS, @(TStream, T1), @(TStream, T2), s1, s2);
5.3.4 Or Parallel Operator
The weak sum operator can be defined for streams built as the logical sum of different types. This is also a nondeterministic operator.

def TWSSUM21 =
\(T1 : Prop, T2 : Prop)
\(s1 : @(TStream, T1))
@(INJ1, @(TStream, T1), @(TStream, T2), s1);

def TWSSUM22 =
\(T1 : Prop, T2 : Prop)
\(s2 : @(TStream, T2))
@(INJ2, @(TStream, T1), @(TStream, T2), s2);
5.3.5 Split Operator

5.3.5.1 Strong Split Operator
Splitting a stream into two according to the condition E. The two streams are built as a strong sum.

dec TSSPLIT : !(T : Prop) [@(TStream, T) -> [T -> Bool] -> ?(x : @(TStream, T)) @(TStream, T)];

def TSSPLIT =
\(T : Prop)
\(s : @(TStream, T), E : [T -> Bool])
@(GIF_THEN_ELSE1, ?(x : @(TStream, T)) @(TStream, T),
  @(TSEQ, T, s, @(TSNIL, T)),
  <s, s>,
  let t = @(FST, s), r = @(SND, s) in
  let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
  let ns = @(TSSPLIT, T, r, E) in
  let s1 = @(FST, ns), s2 = @(SND, ns) in
  @(GIF_THEN_ELSE1, ?(x : @(TStream, T)) @(TStream, T),
    @(E, ht),
    <<t, s1>, s2>,
    <s1, <t, s2>>));
5.3.5.2 Weak Split Operator
Splitting a stream into two according to the condition E. The two streams are built as a product.

dec TWSPLIT : !(T : Prop) [@(TStream, T) -> [T -> Bool] -> @(And, @(TStream, T), @(TStream, T))];

def TWSPLIT =
\(T : Prop)
\(s : @(TStream, T), E : [T -> Bool])
@(GIF_THEN_ELSE, @(And, @(TStream, T), @(TStream, T)),
  @(TSEQ, T, s, @(TSNIL, T)),
  @(ANDS, @(TStream, T), @(TStream, T), s, s),
  let t = @(FST, s), r = @(SND, s) in
  let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
  let ns = @(TWSPLIT, T, r, E) in
  let s1 = @(PJ1, @(TStream, T), @(TStream, T), ns),
      s2 = @(PJ2, @(TStream, T), @(TStream, T), ns) in
  @(GIF_THEN_ELSE, @(And, @(TStream, T), @(TStream, T)),
    @(E, ht),
    @(ANDS, @(TStream, T), @(TStream, T), <t, s1>, s2),
    @(ANDS, @(TStream, T), @(TStream, T), s1, <t, s2>)));
5.3.5.3 Fairness Split Operator
dec TFSPLIT : !(T : Prop) [@(TStream, T) -> @(And, @(TStream, T), @(TStream, T))];

def TFSPLIT =
\(T : Prop)
\(s : @(TStream, T))
@(GIF_THEN_ELSE, @(And, @(TStream, T), @(TStream, T)),
  @(TSEQ, T, s, @(TSNIL, T)),
  @(ANDS, @(TStream, T), @(TStream, T), s, s),
  let t = @(FST, s), r = @(SND, s) in
  let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
  @(GIF_THEN_ELSE, @(And, @(TStream, T), @(TStream, T)),
    @(TSEQ, T, r, @(TSNIL, T)),
    @(ANDS, @(TStream, T), @(TStream, T), <t, r>, r),
    let k = @(FST, r), u = @(SND, r) in
    let hk = @(PJ1, T, Bool, k), tk = @(PJ2, T, Bool, k) in
    let ns = @(TFSPLIT, T, u) in
    let s1 = @(PJ1, @(TStream, T), @(TStream, T), ns),
        s2 = @(PJ2, @(TStream, T), @(TStream, T), ns) in
    @(ANDS, @(TStream, T), @(TStream, T), <t, s1>, <k, s2>)));
5.3.6 Pipe Operator
The pipe operator is very similar to the function composition operator in an ordinary functional programming language.

def TSPIPE =
\(T1 : Prop, T2 : Prop, T3 : Prop)
\(f : [@(TStream, T1) -> @(TStream, T2)],
  g : [@(TStream, T2) -> @(TStream, T3)])
\(s : @(TStream, T1))
@(g, @(f, s));
5.3.7 Conditional Operator
If X and Y are two streams of type T on the basic clock, with value sequences (x1, x2, ..., xn, ...) and (y1, y2, ..., yn, ...) respectively, and B is a stream of type Bool with value sequence (b1, b2, ..., bn, ...), then the expression @(TIFTHENELSE, T, B, X, Y) is a flow on the basic clock whose nth value, for any natural number n, is @(GIF_THEN_ELSE, T, bn, xn, yn). It is defined as follows:

dec TIFTHENELSE : !(T : Prop) [@(TStream, Bool) -> @(TStream, T) -> @(TStream, T) -> @(TStream, T)];

def TIFTHENELSE =
\(T : Prop)
\(b : @(TStream, Bool), s : @(TStream, T), t : @(TStream, T))
let hb = @(FST, b), tb = @(SND, b),
    hs = @(FST, s), ts = @(SND, s),
    ht = @(FST, t), tt = @(SND, t) in
@(GIF_THEN_ELSE0, @(TStream, T),
  @(PJ1, Bool, Bool, hb),
  <hs, @(TIFTHENELSE, T, tb, ts, tt)>,
  <ht, @(TIFTHENELSE, T, tb, ts, tt)>);
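The pointwise behavior can be illustrated in Python over finite prefixes of the three flows; tifthenelse below is an illustrative sketch, not the PowerEpsilon definition.

```python
# The conditional operator on synchronous flows is pointwise: the nth output
# is x_n when b_n is true and y_n otherwise, all flows on the basic clock.

def tifthenelse(b, x, y):
    return [xn if bn else yn for bn, xn, yn in zip(b, x, y)]
```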
5.4 Temporal Operators

5.4.1 PRE Operator
This is also called the delay operator. It allows a trace of the value of an expression to be kept from one cycle to another. During the first cycle, the previous value is undefined.
dec DO_TPRES : !(T : Prop) [@(TStream, T) -> T -> @(TStream, T)];

def DO_TPRES =
\(T : Prop, s : @(TStream, T), x : T)
let t = @(FST, s), r = @(SND, s) in
let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
<@(ANDS, T, Bool, x, tt), @(DO_TPRES, T, r, ht)>;

def TPRES =
\(T : Prop, s : @(TStream, T))
let t = @(FST, s), r = @(SND, s) in
let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
<@(ANDS, T, Bool, @(SNULL, T), tt), @(DO_TPRES, T, r, ht)>;
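The delay behavior can be illustrated in Python on a finite prefix, with None standing in for the undefined first value; the function name pre is ours.

```python
# PRE (delay) sketched on lists: the output at each cycle is the input's
# value from the previous cycle; the first output is undefined (None here).

def pre(s, undefined=None):
    out, prev = [], undefined
    for x in s:
        out.append(prev)
        prev = x
    return out
```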
5.4.2 FBY Operator
This is another delay operator, called the followed-by operator. If x is an object of type T and s = (s1, s2, ..., sn, ...), then @(TFBY, T, x, s) is the stream (x, s2, ..., sn, ...).

dec TFBY : !(T : Prop) [T -> @(TStream, T) -> @(TStream, T)];

def TFBY =
\(T : Prop, x : T, s : @(TStream, T))
let t = @(FST, s), r = @(SND, s) in
let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
<@(ANDS, T, Bool, x, tt), r>;
The following followed-by operator is not allowed in a strict synchronous model.

dec TFBY2 : !(T : Prop) [T -> Bool -> @(TStream, T) -> @(TStream, T)];
def TFBY2 = \(T : Prop, x : T, b : Bool, s : @(TStream, T)) <@(ANDS, T, Bool, x, b), s>;
dec TFBYN : !(T : Prop) [T -> Nat -> @(TStream, T) -> @(TStream, T)]; def TFBYN = \(T : Prop, x : T, n : Nat, s : @(TStream, T)) let t = @(FST, s), r = @(SND, s) in let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in @(GIF_THEN_ELSE0, @(TStream, T), @(EQUAL, n, OO), s, );
Again, the following followed-by operator is not allowed in a strict synchronous framework. dec TFBYN2 : !(T : Prop) [T -> Bool -> Nat -> @(TStream, T) -> @(TStream, T)]; def TFBYN2 = \(T : Prop, x : T, b : Bool, n : Nat, s : @(TStream, T)) @(GIF_THEN_ELSE0, @(TStream, T), @(EQUAL, n, OO), s, );
5.4.3 Initialization Operator
dec INIT : !(T : Prop) [@(TStream, T) -> @(TStream, T) -> @(TStream, T)];
def INIT = \(T : Prop, f : @(TStream, T), g : @(TStream, T)) <@(FST, f), @(SND, g)>;
The timing is changed in the following definition, which is therefore forbidden in the synchronous model.

dec INIT2 : !(T : Prop) [@(TStream, T) -> @(TStream, T) -> @(TStream, T)];
def INIT2 = \(T : Prop, f : @(TStream, T), g : @(TStream, T)) <@(FST, f), g>;
5.4.4 Current Operator
It allows clocks to be handled: if E is an expression on clock C, then current(E) is an expression on the same clock as C itself. The value of current(E) at each cycle of its clock is the value taken by E the last time that C was true.

dec DO_CURRENT : !(T : Prop) [@(TStream, T) -> T -> @(Stream, T)];

def CURRENT =
\(T : Prop, s : @(TStream, T))
@(DO_CURRENT, T, s, @(SNULL, T));

def DO_CURRENT =
\(T : Prop)
\(s : @(TStream, T), x : T)
let t = @(FST, s), r = @(SND, s) in
let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
@(GIF_THEN_ELSE0, @(Stream, T),
  tt,
  <ht, @(DO_CURRENT, T, r, ht)>,
  <x, @(DO_CURRENT, T, r, x)>);

dec DO_TCURRENT : !(T : Prop) [@(TStream, T) -> T -> @(TStream, T)];

def TCURRENT =
\(T : Prop, s : @(TStream, T))
@(DO_TCURRENT, T, s, @(SNULL, T));

def DO_TCURRENT =
\(T : Prop)
\(s : @(TStream, T), x : T)
let t = @(FST, s), r = @(SND, s) in
let ht = @(PJ1, T, Bool, t), tt = @(PJ2, T, Bool, t) in
@(GIF_THEN_ELSE0, @(TStream, T),
  tt,
  <@(ANDS, T, Bool, ht, tt), @(DO_TCURRENT, T, r, ht)>,
  <@(ANDS, T, Bool, x, tt), @(DO_TCURRENT, T, r, x)>);
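The sample-and-hold behavior of current can be illustrated in Python on (value, tick) pairs; the names current and initial are illustrative only.

```python
# current sketched on (value, tick) pairs: at every cycle of the basic
# clock, emit the value last carried when the clock C was true, holding it
# across the instants where C is false.

def current(ts, initial=None):
    out, held = [], initial
    for value, tick in ts:
        if tick:
            held = value
        out.append(held)
    return out
```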
5.4.5 When Operator
E when C. If E is an expression and C a Boolean expression, E when C is an expression whose sequence is extracted from the sequence of E values by selecting only the values taken when C is true. Thus, the expression E when C does not have the same clock as E and C. dec TWHENDO : !(T : Prop) [@(TStream, T) -> @(TStream, Bool) -> @(TStream, T)]; def TWHENDO = \(T : Prop) \(s : @(TStream, T), b : @(TStream, Bool)) let hs = @(FST, s), ts = @(SND, s), hb = @(FST, b), tb = @(SND, b) in let hhs = @(PJ1, T, Bool, hs), ths = @(PJ2, T, Bool, hs) in @(GIF_THEN_ELSE0, @(TStream, T), @(PJ1, Bool, Bool, hb), , );
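The sampling behavior of the when operator can be sketched in Python, under the assumption that a stream is modeled as a list and its clock as a parallel list of Booleans (illustrative names):

```python
def when(stream, clock):
    """Sample `stream` at the cycles where `clock` is True (a slower clock)."""
    return [v for v, c in zip(stream, clock) if c]
```

For example, `when([1, 2, 3, 4], [True, False, True, True])` yields `[1, 3, 4]`, a stream on a slower clock than its input.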
5.4.6 Principles for Timed Streams
• Newly computed values can only be appended at the end of a sequence of already computed values.
• The basic timing cycle of a timed dataflow may not be changed. In other words, we do not allow basic clock time intervals to be split into smaller ones.
• The timing stamp of a timed dataflow may be changed only by certain operators.
Chapter 6
Programming in Dataflow Languages

The real difference between dataflow languages and imperative languages is the way in which a problem and its solution (the program) are broken up into modules. In imperative languages the basic concepts are “storage” and “command”. Modules therefore represent commands to change the store, and the hierarchical approach means expressing complicated commands in terms of simpler ones. In dataflow languages, the basic concepts are those of “stream” and “filter”. A module in a dataflow language is therefore a filter, and the hierarchical approach involves building up complicated filters by “plumbing together” simpler ones. Dataflow languages and imperative languages such as Pascal are, therefore, not just different ways of saying things: they are different ways of saying different things. PowerEpsilon and Pascal are based on different forms of computation, i.e., on different notions of what constitutes an algorithm. The difference can be summarized as follows: the Pascal programmer’s algorithm could be drawn as a control flow chart, while the dataflow programmer’s algorithm could be drawn as a data flow chart. The best way to convey the programming methodology appropriate for modeling dataflow programming in PowerEpsilon is to present a few example problems and their solutions.
6.1 Hamming’s Problem
The first problem which we will consider is that of producing the stream of all numbers of the form 2^I 3^J 5^K in increasing order, and without repetitions. This stream should therefore begin 1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, . . .
Dataflow Programming
80
The problem is easy enough to state, but for the imperative programmer it is not at all obvious how to proceed. In fact, the program as stated is extremely difficult to write in an imperative programming language, because we have demanded all the numbers. But even if we simplify it and demand (say) only the first thousand, there is still no self-evident solution. A good program is difficult to write (sizes of arrays have to be worked out) and can be quite complex. A dataflow language could always ‘transcribe’ the imperative solution, but in this case (as in many others) that would be a mistake. There is a very easy dataflow solution to the problem, and the solution can be expressed naturally and elegantly in dataflow languages. The dataflow solution is based on the following observation: if h is the desired stream, then the streams 2 × h, 3 × h and 5 × h (the streams formed by multiplying the components of h by 2, 3 and 5, respectively) are substreams of h. Furthermore, every value of h other than 1 is a value of at least one of the three substreams. In other words, the stream h is 1 followed by the result of merging 2 × h, 3 × h and 5 × h. In PowerEpsilon we can express this fact with the equation

def H = ;

with MERGE a filter (to be defined) which produces the ordered merge of its two arguments. To complete our program, all we have to do is to find a definition for MERGE. The following recursive definition is the one that comes to mind first, especially for programmers who are used to recursive, LISP-style programming.

dec MERGE : [@(Stream, Nat) -> @(Stream, Nat) -> @(Stream, Nat)];
def MERGE =
  \(x : @(Stream, Nat), y : @(Stream, Nat))
  let a = @(FST, x), b = @(FST, y) in
  @(GIF_THEN_ELSE0, @(Stream, Nat),
    @(LESS, a, b),
    ,
    @(GIF_THEN_ELSE0, @(Stream, Nat),
      @(LESS, b, a),
      ,
));
The above definition is correct, but it is not really satisfactory. It is not really in the spirit of dataflow. Dataflow languages certainly allow such definitions, and they do have a dataflow interpretation, but this interpretation involves a dynamic, expanding dataflow net (with more and more nodes being generated ‘at run time’). A programmer who writes a definition like that has slipped into the mistake of viewing streams as static objects, like infinite lists. This is a very bad habit, one which people experienced in ‘functional programming’ can find hard to shake. Recursive definitions, when they are not appropriate, can be cumbersome and hard to understand. They can also be inefficient on some kinds of dataflow machines and even on conventional machines. There is in fact a straightforward iterative way to merge two streams, one which is familiar to every applications programmer. To merge x and y you take the values of x until they exceed the next available value of y; then the values of y until they exceed the next available value of x; then the values of x, and so on. While the values of one stream are being taken (and passed on as output), the values of the other are ‘held up’. It is not hard to express this iterative algorithm in PowerEpsilon. We simply use the function UPON to ‘slow down’ the two streams. Recall that the values of @(UPON, x, p) are those of x, except that the new values are ‘held up’ (and the previous one is repeated) as long as the corresponding value of p is false. (@(UPON, x, p) is the stream x with new values appearing upon the truth of p.) If we let xx and yy be the ‘slowed down’ versions of x and y, respectively, the desired output is (at any given time) the least of the corresponding values of xx and yy. When this least value is xx, a new value of x must be brought in; when it is yy, a new value of y is required.
The program definition of MERGE is therefore

def MERGE =
  \(x : @(Stream, Nat), y : @(Stream, Nat))
  lec xx : @(Stream, Nat), yy : @(Stream, Nat) in
  let xx = @(UPON, x, @(SLSEQ, xx, yy)),
      yy = @(UPON, y, @(SLSEQ, yy, xx)) in
  @(SIFTHENELSE, Nat, @(SLSEQ, xx, yy), xx, yy);
Our complete Hamming program is therefore dec HAMMING : @(Stream, Nat); def HAMMING = @(FBY2, Nat, ONE, @(MERGE,
@(MERGE, @(STIMES, TWO, HAMMING), @(STIMES, THREE, HAMMING)), @(STIMES, FIVE, HAMMING)));
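The same network can be sketched in Python (an illustrative translation, not PowerEpsilon): the stream h is grown iteratively, and the three substreams 2 × h, 3 × h, 5 × h are ‘held up’ by read positions, much as UPON holds up a stream:

```python
def hamming(limit):
    """First `limit` values of the stream 1 fby merge(merge(2*h, 3*h), 5*h)."""
    h = [1]
    i2 = i3 = i5 = 0                     # read positions into 2*h, 3*h, 5*h
    while len(h) < limit:
        n2, n3, n5 = 2 * h[i2], 3 * h[i3], 5 * h[i5]
        nxt = min(n2, n3, n5)            # ordered merge: take the least value
        h.append(nxt)
        if nxt == n2: i2 += 1            # advance every substream that
        if nxt == n3: i3 += 1            # produced the value, which also
        if nxt == n5: i5 += 1            # removes repetitions
    return h
```

`hamming(13)` yields `[1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18]`.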
6.2 Prime Problem
The next problem we will consider is that of generating the stream of all primes. From the dataflow point of view, the strategy is obvious: generate the stream of natural numbers greater than 1 and pass it through a filter which discards those numbers which are not prime.

dec PRIMESIN : [@(Stream, Nat) -> @(Stream, Nat)];
def PRIMESIN =
  \(l : @(Stream, Nat))
  let h = @(FST, l), t = @(SND, l) in
  let ISPRIME =
    \(n : Nat)
    lec NODIVSIN : [@(Stream, Nat) -> Bool] in
    let NODIVSIN =
      \(s : @(Stream, Nat))
      let q = @(FST, s), z = @(SND, s) in
      @(OR, @(LESS, n, @(TIMES, q, q)),
        @(AND, @(NOT, @(EQUAL, @(MOD, n, q), OO)),
          @(NODIVSIN, z))) in
    @(NODIVSIN, l) in
  @(GIF_THEN_ELSE0, @(Stream, Nat),
    @(ISPRIME, h),
    ,
    @(PRIMESIN, t));

def DOPRIMES = ;

def PRIMES =
  \(l : @(Stream, Nat))
  let ISPRIME =
    \(n : @(Stream, Nat))
    let h = @(FST, n), t = @(SND, n) in
    lec NODIVSIN : [@(Stream, Nat) -> @(Stream, Bool)] in
    let NODIVSIN =
      \(s : @(Stream, Nat))
      let q = @(FST, s), z = @(SND, s) in
      let b = @(GIF_THEN_ELSE, Bool,
                @(LESS, h, @(TIMES, q, q)),
                @(NOT, @(EQUAL, @(MOD, h, q), OO)),
                FF) in
       in
    @(NODIVSIN, l) in
  @(WHENEVER, l, @(ISPRIME, l));

def DOPRIMES2 = ;
For the sake of variety, let us now try a completely different strategy. A number is prime if it is not composite; this suggests that we generate the stream of all composite numbers and take its complement (with respect to the natural numbers greater than 1). Our first subproblem is that of defining the COMPLEMENT filter.

def COMPLEMENT =
  \(x : @(Stream, Nat))
  lec xx : @(Stream, Nat), ii : @(Stream, Nat) in
  let xx = @(UPON, x, @(SEQUAL, ii, xx)),
      ii = @(DoNatStr, TWO) in
  @(WHENEVER, ii, @(NOTSTR, @(SEQUAL, ii, xx)));

dec PM : [@(Stream, Nat) -> @(Stream, Nat)];
def PM =
  \(x : @(Stream, Nat))
  let i = @(DoNatStr, THREE) in
  ;

dec PRIMES2 : @(Stream, Nat);
def PRIMES2 =
  let composite = @(PM, @(FBY2, Nat, TWO, @(SND, PRIMES2))) in
  @(COMPLEMENT, composite);
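The filtering strategy behind PRIMESIN can be sketched with Python generators (names here are illustrative): the naturals greater than 1 flow into a chain of filters, each of which discards the multiples of one prime.

```python
from itertools import count, islice

def primes():
    """Pass the naturals > 1 through a growing chain of divisibility filters."""
    def sieve(stream):
        p = next(stream)                          # the head is always prime
        yield p
        yield from sieve(v for v in stream if v % p != 0)
    return sieve(count(2))
```

For example, `list(islice(primes(), 5))` gives `[2, 3, 5, 7, 11]`.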
6.3 Boolean Streams
def BoolStream = @(Stream, Bool);
6.3.1 Intersection Operator
The logical intersection of two Boolean streams is defined as follows:

dec ANDSTR : [@(Stream, Bool) -> @(Stream, Bool) -> @(Stream, Bool)];
def ANDSTR =
  \(s1 : @(Stream, Bool), s2 : @(Stream, Bool))
  let h1 = @(FST, s1), h2 = @(FST, s2),
t1 = @(SND, s1), t2 = @(SND, s2) in ;
6.3.2 Union Operator
The logical union of two Boolean streams is defined as follows:

dec ORSTR : [@(Stream, Bool) -> @(Stream, Bool) -> @(Stream, Bool)];
def ORSTR =
  \(s1 : @(Stream, Bool), s2 : @(Stream, Bool))
  let h1 = @(FST, s1), h2 = @(FST, s2),
      t1 = @(SND, s1), t2 = @(SND, s2) in
  ;
6.3.3 Negation Operator
The negation operator for a given Boolean stream is defined as follows:

dec NOTSTR : [BoolStream -> BoolStream];
def NOTSTR =
  \(s : BoolStream)
  let h = @(FST, s), t = @(SND, s) in
  ;
6.3.4 Implication Operator
We will also be able to define the implication operator for two given Boolean streams.

dec IMPLYSTR : [@(Stream, Bool) -> @(Stream, Bool) -> @(Stream, Bool)];
def IMPLYSTR =
  \(s1 : @(Stream, Bool), s2 : @(Stream, Bool))
  let h1 = @(FST, s1), h2 = @(FST, s2),
      t1 = @(SND, s1), t2 = @(SND, s2) in
  ;
6.3.5 Conditional Operator
A conditional operator for a Boolean stream is defined as follows: dec GIFTHELSTR :
!(T : Prop) [@(Stream, Bool) -> @(Stream, T) -> @(Stream, T) -> @(Stream, T)]; def GIFTHELSTR = \(T : Prop, b : @(Stream, Bool), e1 : @(Stream, T), e2 : @(Stream, T)) @(GIF_THEN_ELSE0, @(Stream, T), @(FST, b), , );
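Pointwise operators such as ANDSTR and GIFTHELSTR have a direct list-based sketch in Python (illustrative names, assuming streams are modeled as lists of equal length):

```python
def and_str(s1, s2):
    """Pointwise logical intersection of two Boolean streams."""
    return [a and b for a, b in zip(s1, s2)]

def if_then_else_str(b, e1, e2):
    """Pointwise conditional: take from e1 where b holds, else from e2."""
    return [x if c else y for c, x, y in zip(b, e1, e2)]
```

For example, `if_then_else_str([True, False], [1, 2], [9, 8])` yields `[1, 8]`.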
6.4 Dining Philosopher Problem

6.4.1 Problem Description
This problem is an adaptation of the one posed by E. W. Dijkstra. Five philosophers spend their lives eating spaghetti and thinking. They eat at a circular table in a dining room. The table has five chairs around it, and chair number i has been assigned to philosopher number i (0 ≤ i ≤ 4). Five forks have also been laid out on the table so that there is precisely one fork between every two adjacent chairs. Consequently, there is one fork to the left of each chair and one to its right. Fork number i is to the right of chair number i. In order to be able to eat, a philosopher must enter the dining room and sit in the chair assigned to him. A philosopher must have two forks to eat (the forks placed to the left and right of his chair). If the philosopher cannot get two forks immediately, then he must wait until he can get them. The forks are picked up one at a time, with the left fork being picked up first. When a philosopher is finished eating (after a finite amount of time), he puts the forks down and leaves the room. The dining philosophers problem has been studied extensively in the computer science literature. It is used as a benchmark to check the appropriateness of concurrent programming facilities and of proof techniques for concurrent programs. It is interesting because, despite its apparent simplicity, it illustrates many of the problems, such as shared resources and deadlock, encountered in concurrent programming. The forks are the resources shared by the philosophers, who represent the concurrent processes.
6.4.2 A Dataflow Solution in PowerEpsilon
We represent the problem by defining each fork as a Boolean stream and each philosopher as a filter with two fork streams (left fork and right fork) as the inputs. Since each philosopher needs two forks, each fork stream has to be split into two substreams.

def Fork = Bool;

dec PHIL : [@(Stream, Fork) -> @(Stream, Fork) -> Prop];
def PHIL =
  \(f1 : @(Stream, Fork), f2 : @(Stream, Fork))
  let h1 = @(FST, f1), h2 = @(FST, f2),
      t1 = @(SND, f1), t2 = @(SND, f2) in
  @(PHIL, t1, t2);

dec DINING : [@(Stream, Fork) ->
              @(Stream, Fork) ->
              @(Stream, Fork) ->
              @(Stream, Fork) ->
              @(Stream, Fork) ->
              Prop];
def DINING =
  \(q1 : @(Stream, Fork), q2 : @(Stream, Fork), q3 : @(Stream, Fork),
    q4 : @(Stream, Fork), q5 : @(Stream, Fork))
  let r1 = @(FSPLIT, Fork, q1), r2 = @(FSPLIT, Fork, q2),
      r3 = @(FSPLIT, Fork, q3), r4 = @(FSPLIT, Fork, q4),
      r5 = @(FSPLIT, Fork, q5) in
  let r11 = @(PJ1, @(Stream, Fork), @(Stream, Fork), r1),
      r12 = @(PJ2, @(Stream, Fork), @(Stream, Fork), r1),
      r21 = @(PJ1, @(Stream, Fork), @(Stream, Fork), r2),
      r22 = @(PJ2, @(Stream, Fork), @(Stream, Fork), r2),
      r31 = @(PJ1, @(Stream, Fork), @(Stream, Fork), r3),
      r32 = @(PJ2, @(Stream, Fork), @(Stream, Fork), r3),
      r41 = @(PJ1, @(Stream, Fork), @(Stream, Fork), r4),
      r42 = @(PJ2, @(Stream, Fork), @(Stream, Fork), r4),
      r51 = @(PJ1, @(Stream, Fork), @(Stream, Fork), r5),
      r52 = @(PJ2, @(Stream, Fork), @(Stream, Fork), r5) in
  let s1 = @(PHIL, r52, r11),
      s2 = @(PHIL, r12, r21),
      s3 = @(PHIL, r22, r31),
      s4 = @(PHIL, r32, r41),
      s5 = @(PHIL, r42, r51) in
  Null;
The fairness split operator FSPLIT is used here so that a fork stream will be fairly divided into two streams. Notice that if an unfair split operator is used, a deadlock situation may arise in which each PHIL operator waits on one fork stream forever.

dec FORKSTREAM : @(Stream, Fork);
def FORKSTREAM = ;

def DINING_SYSTEM =
  let q1 = FORKSTREAM, q2 = FORKSTREAM, q3 = FORKSTREAM,
      q4 = FORKSTREAM, q5 = FORKSTREAM in
  @(DINING, q1, q2, q3, q4, q5);
6.4.3 An Imperative Solution in C
The five philosophers are implemented as tasks and the five forks are implemented as semaphores. On creation, each philosopher is given an identification. Each philosopher is mortal and passes on to the next world soon after having eaten 100,000 times (about three times a day for 90 years). A variation of the above problem is to allow a philosopher to sit in any chair. This variation will result in a smaller average waiting time for eating for the philosophers. This scheme can be implemented by declaring a new task that is called by every philosopher to request a chair (preferably one with free forks). On leaving the dining room, a philosopher informs this task that the chair is vacant. In the solution given, no individual philosopher will be blocked indefinitely from eating, i.e., starve, because the philosophers pick up the forks in first-in first-out order (the discipline associated with a task queue of the same priority). However, there is a possibility of deadlock in the solution given above, e.g., each philosopher picks up one fork and waits to get another fork so that he can start eating. Assuming that all the philosophers are obstinate and that none of them will give up his fork until he gets another fork and has eaten, everything will be
in a state of suspension and all the philosophers will starve. Deadlock can be avoided in several ways; for example, a philosopher may pick up the two forks he needs only when both forks are available. Alternatively, one could add another task called host that makes sure there are at most four philosophers in the dining room at any given time. Each philosopher must request permission to enter the room from the host and must inform him on leaving. More simply, we may allow at most four philosophers to enter the room, so that at least one philosopher is always able to pick up two forks and finish eating. There is no possibility of a deadlock with this change, since at least one philosopher in the room will be able to eat. Since they all eat for a finite time, he will leave and some other philosopher will be able to eat.

Root Task

void root(void)
{
    void *dummy;
    void *data_ptr;
    unsigned long ioretpb[4], ioretval;
    unsigned long date, time, ticks;
    unsigned long tid[5];
    unsigned long targ[4];
    unsigned long rc;
    int i;

    /*---------------------------------------------------------------------*/
    /* Set date to March 1, 1996, time to 8:30 AM, and start the system    */
    /* clock running.                                                      */
    /*---------------------------------------------------------------------*/
    date = (1996
%-- Plus --% T -> %-- Times --% T];
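The ‘host’ variant described above can be sketched with Python threads and semaphores (an illustrative model, not the original C tasking code): admitting at most four philosophers at a time guarantees, by pigeonhole, that some seated philosopher can always pick up both forks, so every run terminates.

```python
import threading

N_PHIL, MEALS = 5, 50
forks = [threading.Semaphore(1) for _ in range(N_PHIL)]
host = threading.Semaphore(N_PHIL - 1)      # at most four philosophers seated
eaten = [0] * N_PHIL

def philosopher(i):
    left, right = forks[i], forks[(i + 1) % N_PHIL]
    for _ in range(MEALS):
        with host:                          # ask the host for a chair
            with left:                      # left fork is picked up first
                with right:
                    eaten[i] += 1           # eat

def dine():
    ts = [threading.Thread(target=philosopher, args=(i,)) for i in range(N_PHIL)]
    for t in ts: t.start()
    for t in ts: t.join()
    return eaten
```

`dine()` returns `[50, 50, 50, 50, 50]`: every philosopher eats all his meals and no deadlock occurs.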
dec IS_WRITE : [StackCmd -> Bool];
dec IS_NUMB  : [StackCmd -> Bool];
dec IS_PLUS  : [StackCmd -> Bool];
dec IS_TIMES : [StackCmd -> Bool];
dec GET_NUMB : [StackCmd -> Int];

dec LNIL : !(X : Prop) @(Stream, @(List, X));
dec LCONS : !(X : Prop)
  [X -> @(Stream, @(List, X)) -> @(Stream, @(List, X))];
dec LLCONS : !(X : Prop)
  [@(Stream, X) -> @(Stream, @(List, X)) -> @(Stream, @(List, X))];
dec LHEAD : !(X : Prop) [@(Stream, @(List, X)) -> @(Stream, X)];
dec LTAIL : !(X : Prop) [@(Stream, @(List, X)) -> @(Stream, @(List, X))];

def StackDevice =
  \(t : @(Stream, StackCmd))
  let ht = @(FST, t), tt = @(SND, t) in
  lec StrStack2 : @(Stream, @(List, Int)) in
  let StrStack2 =
    @(FBY2, @(List, Int), @(NIL, Int),
      @(GIF_THEN_ELSE0, @(Stream, @(List, Int)),
        @(IS_WRITE, ht),
        @(LTAIL, Int, StrStack2),
        @(GIF_THEN_ELSE0, @(Stream, @(List, Int)),
          @(IS_NUMB, ht),
          let x = @(GET_NUMB, ht) in
          @(LCONS, Int, x, StrStack2),
          @(GIF_THEN_ELSE0, @(Stream, @(List, Int)),
            @(IS_PLUS, ht),
            let n1 = @(LHEAD, Int, StrStack2),
                n2 = @(LHEAD, Int, @(LTAIL, Int, StrStack2)),
                st = @(LTAIL, Int, @(LTAIL, Int, StrStack2)) in
            @(LLCONS, Int, @(ADDISTR, n1, n2), st),
            @(GIF_THEN_ELSE0, @(Stream, @(List, Int)),
              @(IS_TIMES, ht),
              let n1 = @(LHEAD, Int, StrStack2),
                  n2 = @(LHEAD, Int, @(LTAIL, Int, StrStack2)),
                  st = @(LTAIL, Int, @(LTAIL, Int, StrStack2)) in
              @(LLCONS, Int, @(MULISTR, n1, n2), st),
              @(LTAIL, Int, StrStack2)))))) in
  @(WHENEVER2, Int, @(LHEAD, Int, StrStack2), @(IS_WRITE, ht));

6.6 Dataflow Sorting

6.6.1 Bubble Sort
The Bubble Sort algorithm can be viewed as going over the vector from beginning to end repeatedly until the array is in order. Bubble sorting is a kind of combing operation: the comb is dragged across the vector over and over until it is finally ‘smooth’ (sorted). Normally, one imagines the procedure being carried out with only one comb, so that the second pass cannot begin until the first pass has ended. This assumption, however, is just a reflection of the strictly sequential, one-operation-at-a-time outlook characteristic of the imperative languages. In reality, there is no reason not to use more than one comb and start the second pass before the first has finished. The second ‘comb’ could follow close behind the first.

6.6.1.1 Bubble Sort, Version 1
dec SLENGTH : [@(Stream, Nat) -> Nat]; def BUBBLE = \(a : @(Stream, Nat)) lec max : @(Stream, Nat) in let larger = \(x : @(Stream, Nat), y : @(Stream, Nat)) @(GIF_THEN_ELSE0, @(Stream, Nat), @(IS_END_OF_DATA, Nat, y), y, @(SIFTHENELSE, Nat, @(SLESS, x, y),
y, x)), smaller = \(x : @(Stream, Nat), y : @(Stream, Nat)) @(GIF_THEN_ELSE0, @(Stream, Nat), @(IS_END_OF_DATA, Nat, y), x, @(SIFTHENELSE, Nat, @(SLESS, x, y), x, y)), max = @(FBY2, Nat, @(FST, a), @(larger, max, @(SND, a))) in @(smaller, max, @(SND, a)); dec BUBBLES : [@(Stream, Nat) -> Nat -> @(Stream, Nat)]; def BUBBLES = \(a : @(Stream, Nat), n : Nat) @(GIF_THEN_ELSE0, @(Stream, Nat), @(IS_ZERO, n), a, @(BUBBLES, @(BUBBLE, a), @(PP, n))); def BSORT = \(a : @(Stream, Nat)) @(BUBBLES, a, @(SLENGTH, a));
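The comb analogy can be sketched in Python (an illustrative list-based model): one pass keeps a running maximum (the ‘bubble’) and passes every smaller value on, and sorting drags the comb across the stream once per element.

```python
def bubble(stream):
    """One comb pass: keep a running maximum and pass the smaller value on."""
    out, mx = [], None
    for v in stream:
        if mx is None:
            mx = v
        elif v < mx:
            out.append(v)          # v is passed on, the max is held back
        else:
            out.append(mx)         # a new maximum displaces the old one
            mx = v
    out.append(mx)                 # the maximum bubbles out last
    return out

def bsort(stream):
    """Version 1: drag the comb across the stream once per element."""
    for _ in range(len(stream)):
        stream = bubble(stream)
    return stream
```

`bsort([5, 1, 4, 2, 3])` yields `[1, 2, 3, 4, 5]`.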
6.6.1.2 Bubble Sort, Version 2
Unfortunately, this version is a little too simple-minded. For one thing, it needs to know the length of a, which means using SLENGTH to go all the way to the end of a before any processing begins. Also, each bubble filter passes over the whole stream, even though we know that the input to the Nth filter has its first N − 1 components already in order. Here is a better version that does not need the length and that strips off the last output of each filter before passing it on to succeeding ones.

def FOLLOW =
  \(x : @(Stream, Nat), y : @(Stream, Nat))
  lec xdone : @(Stream, Bool) in
  let xdone = @(FBY2, Bool, @(SEQ, Nat, x, @(SNIL, Nat)), xdone) in
@(SIFTHENELSE, Nat, xdone, @(UPON, y, xdone), x); def LAST = \(x : @(Stream, Nat)) @(FBY2, Nat, @(ASA2, x, @(IS_END_OF_DATA, Nat, @(SND, x))), @(SNIL, Nat)); def ALL_BUT_LAST = \(x : @(Stream, Nat)) @(GIF_THEN_ELSE0, @(Stream, Nat), @(NOT, @(IS_END_OF_DATA, Nat, @(SND, x))), x, @(SNIL, Nat)); dec BSORT2 : [@(Stream, Nat) -> @(Stream, Nat)]; def BSORT2 = \(a : @(Stream, Nat)) let b = @(BUBBLE, a) in @(GIF_THEN_ELSE0, @(Stream, Nat), @(IS_END_OF_DATA, Nat, a), a, @(FOLLOW, @(BSORT2, @(ALL_BUT_LAST, b)), @(LAST, b)));
6.6.1.3 Bubble Sort, Version 3
We can try to make our program even smarter by checking if the first bubble pass was successful in completely ordering the stream, and so avoid later computation. This ‘clever’ version is as follows: dec GASA2 : !(T : Prop) [@(Stream, T) -> Bool -> T]; def INORDER = \(b : @(Stream, Nat)) lec inordersofar : @(Stream, Bool) in let inordersofar = in @(GASA2, Bool, inordersofar, @(SEQ, Nat, @(SND, b), @(SNIL, Nat)));
dec BSORT3 : [@(Stream, Nat) -> @(Stream, Nat)]; def BSORT3 = \(a : @(Stream, Nat)) let b = @(BUBBLE, a) in @(GIF_THEN_ELSE0, @(Stream, Nat), @(IS_END_OF_DATA, Nat, a), a, @(GIF_THEN_ELSE0, @(Stream, Nat), @(INORDER, b), b, @(FOLLOW, @(BSORT3, @(ALL_BUT_LAST, b)), @(LAST, b))));
6.6.2 Merge Sort
The bubble sort is in a sense ‘iterative’: the stream passes through a series of filters in a straight line; there is no branching in the flow. There is also a whole family of dataflow sorts which are ‘hierarchical’ or ‘recursive’ in the sense that they divide the input into two parts, sort the parts, then recombine them. The first of these recursive sorts is ‘merge’ sort: the input stream is divided (by taking alternate items) into two streams, each of these is sent to be sorted, and the results are then merged. The MERGE function here is a revised version of the definition used earlier; it has been revised to handle pseudo-finite streams, i.e., streams in which end-of-data appears.

dec DoBoolStr : [Bool -> @(Stream, Bool)];
def DoBoolStr = \(n : Bool) @(SCONS, Bool, n, @(DoBoolStr, n));

dec DoNatStr : [Nat -> @(Stream, Nat)];
def DoNatStr = \(n : Nat) @(SCONS, Nat, n, @(DoNatStr, n));

dec MERGE : [@(Stream, Nat) -> @(Stream, Nat) -> @(Stream, Nat)];
def MERGE =
  \(x : @(Stream, Nat), y : @(Stream, Nat))
  lec takexx : @(Stream, Bool) in
  let xx = @(UPON, x, takexx),
yy = @(UPON, y, @(NOTSTR, takexx)) in let takexx = @(GIF_THEN_ELSE0, @(Stream, Bool), @(IS_END_OF_DATA, Nat, yy), @(DoBoolStr, TT), @(GIF_THEN_ELSE0, @(Stream, Bool), @(IS_END_OF_DATA, Nat, xx), @(DoBoolStr, FF), @(SLESS, xx, yy))) in @(SIFTHENELSE, Nat, takexx, xx, yy);
The merge sort algorithm is then given as follows: dec MSORT : [@(Stream, Nat) -> @(Stream, Nat)]; def MSORT = \(a : @(Stream, Nat)) @(GIF_THEN_ELSE0, @(Stream, Nat), @(IS_END_OF_DATA, Nat, @(SND, a)), a, lec p : @(Stream, Bool) in let p = , b0 = @(WHENEVER, a, p), b1 = @(WHENEVER, a, @(NOTSTR, p)) in @(MERGE, @(MSORT, b0), @(MSORT, b1)));
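The split-sort-merge network can be sketched in Python (an illustrative list-based model of pseudo-finite streams):

```python
def merge(x, y):
    """Ordered merge of two sorted pseudo-finite streams (lists)."""
    out, i, j = [], 0, 0
    while i < len(x) and j < len(y):
        if x[i] <= y[j]:
            out.append(x[i]); i += 1   # take from x while it is not larger
        else:
            out.append(y[j]); j += 1
    return out + x[i:] + y[j:]         # one input hit end-of-data: copy the rest

def msort(a):
    """Split by taking alternate items, sort the halves, merge the results."""
    if len(a) <= 1:
        return a
    return merge(msort(a[0::2]), msort(a[1::2]))
```

`msort([5, 3, 8, 1, 9, 2])` yields `[1, 2, 3, 5, 8, 9]`.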
6.6.3 Quick Sort
We now give another recursive dataflow sort, which works as follows: the input stream is again divided into two, but not at random; the first part consists of those elements less than the first element of the stream, and the second part of those greater than or equal to the first datum. By dividing the stream this way, we save ourselves a merge: we just output all of the sorted first part and then all of the sorted second part. Here is the program:

dec QSORT : [@(Stream, Nat) -> @(Stream, Nat)];
def QSORT =
  \(a : @(Stream, Nat))
  @(GIF_THEN_ELSE0, @(Stream, Nat),
    @(IS_END_OF_DATA, Nat, a),
    a,
    let p = @(SLESS, @(DoNatStr, @(FST, a)), a),
b0 = @(WHENEVER, a, p), b1 = @(WHENEVER, a, @(NOTSTR, p)) in @(FOLLOW, @(QSORT, b0), @(QSORT, b1)));
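The partition-and-follow scheme can be sketched in Python (an illustrative list-based model; the explicit pivot handling is a simplification of the stream program above):

```python
def qsort(a):
    """Partition on the first datum, sort both parts, then output one after the other."""
    if not a:
        return a
    pivot = a[0]
    b0 = [v for v in a if v < pivot]         # strictly below the first datum
    b1 = [v for v in a[1:] if v >= pivot]    # at or above the first datum
    return qsort(b0) + [pivot] + qsort(b1)
```

`qsort([3, 6, 1, 3, 9, 2])` yields `[1, 2, 3, 3, 6, 9]`.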
6.7 Real-Time Programming

6.7.1 Watchdogs
Dataflow programs of signal processing systems are very close to their specification in terms of systems of dynamical equations. However, many systems have an important logical component, and some of them, for instance monitoring systems, are essentially logical systems. Such systems are most often described in terms of automata, parallel automata (STATECHARTS, for instance), and Petri nets, i.e., imperative formalisms which describe states and transitions between states. In this section we shall consider three versions of a “watchdog”, i.e., a device that monitors response times. The watchdog is a function having three Boolean inputs, set, reset and deadline, and emitting a Boolean output alarm. The alarm is true when deadline is true and the last true command was set.

dec TTTSTR : BoolTStream;
dec FFTSTR : BoolTStream;
def TTTSTR = ;
def FFTSTR = ;

def WatchDog1 =
  \(set : @(TStream, Bool),
    reset : @(TStream, Bool),
    deadline : @(TStream, Bool))
  let prereset = @(TPRES, Bool, reset) in
  let folstr = @(GIFTHELTSTR, Bool, set, TTTSTR,
                 @(GIFTHELTSTR, Bool, reset, FFTSTR, prereset)) in
  let is_set = @(INIT, Bool, set, folstr) in
  let alarm = @(ANDTSTR, deadline, is_set) in
  alarm;
We can further assume that set and reset commands never take place at the same time, which can be expressed by an assertion. def WDAssert =
\(set : @(TStream, Bool), reset : @(TStream, Bool)) @(NOTTSTR, @(ANDTSTR, set, reset));
Let us now consider a second version which receives the same commands, but raises the alarm when no reset has occurred for a given time since the last set, this time being given as a number of basic clock cycles. This program reuses function WatchDog1, by providing it with an appropriate deadline parameter: on reception of a set event, a register is initialized, which is then decremented. Deadline occurs when the register value reaches zero; it is built from a general purpose function EDGE which returns true at each rising edge of its input: dec GTNAT : [@(TStream, Nat) -> Nat -> @(TStream, Bool)]; def GTNAT = \(s : @(TStream, Nat), n : Nat) let h = @(FST, s), t = @(SND, s) in let hh = @(PJ1, Nat, Bool, h), ht = @(PJ2, Nat, Bool, h) in ; dec EQNAT : [@(TStream, Nat) -> Nat -> @(TStream, Bool)]; def EQNAT = \(s : @(TStream, Nat), n : Nat) let h = @(FST, s), t = @(SND, s) in let hh = @(PJ1, Nat, Bool, h), ht = @(PJ2, Nat, Bool, h) in ; dec PPNAT : [@(TStream, Nat) -> @(TStream, Nat)]; def PPNAT = \(s : @(TStream, Nat)) let h = @(FST, s), t = @(SND, s) in let hh = @(PJ1, Nat, Bool, h), ht = @(PJ2, Nat, Bool, h) in ; def EDGE = \(b : BoolTStream) let presb = @(TPRES, Bool, b) in @(INIT, Bool, FFTSTR, @(ANDTSTR, b, @(NOTTSTR, presb))); def WatchDog2 = \(set : @(TStream, Bool), reset : @(TStream, Bool), delay : @(TStream, Nat)) lec remain : @(TStream, Nat) in let prerem = @(TPRES, Nat, remain) in let remain = @(GIFTHELTSTR,
Nat, set, delay, @(GIFTHELTSTR, Nat, @(GTNAT, prerem, OO), @(PPNAT, prerem), prerem)) in let deadline = @(INIT, Bool, FFTSTR, @(EDGE, @(EQNAT, remain, OO))) in let alarm = @(WatchDog1, set, reset, deadline) in alarm;
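The behavior of WatchDog2 over a synchronous cycle sequence can be sketched in Python (an illustrative model; the EDGE behavior, raising the alarm only once when the countdown reaches zero, is folded into the loop):

```python
def watchdog(set_s, reset_s, delay):
    """Per cycle: `set` arms a countdown of `delay` ticks, `reset` disarms it;
    the alarm is raised once, at the cycle the countdown reaches zero."""
    alarms, remain, armed = [], 0, False
    for s, r in zip(set_s, reset_s):
        if s:
            remain, armed = delay, True    # (re)arm on a set command
        elif r:
            armed = False                  # a reset cancels the deadline
        elif armed and remain > 0:
            remain -= 1                    # decrement the register
        fired = armed and remain == 0
        if fired:
            armed = False                  # EDGE: report each deadline once
        alarms.append(fired)
    return alarms
```

With `delay = 2`, a set at cycle 0 and no reset raises the alarm at cycle 2; a reset at cycle 1 suppresses it entirely.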
Assume now that the delay is expressed according to a given time-scale, i.e., as a number of occurrences of an event timeunit. We just have to call WatchDog2 with an appropriate clock: WatchDog2 must catch any time unit timeunit and any commands, and must be properly initialized so that alarm never yields nil.

def WatchDog3 =
  \(set : @(TStream, Bool),
    reset : @(TStream, Bool),
    timeunit : @(TStream, Bool),
    delay : @(TStream, Nat))
  let clock = @(INIT, Bool, TTTSTR,
                @(ORTSTR, @(ORTSTR, set, reset), timeunit)) in
  let nset = @(TWHENDO, Bool, set, clock),
      nreset = @(TWHENDO, Bool, reset, clock),
      ndelay = @(TWHENDO, Nat, delay, clock) in
  let alarm = @(TCURRENT, Bool,
                @(WatchDog2, nset, nreset, ndelay)) in
  alarm;
6.7.2 Kilometer Meter
A kilometer meter is attached to the wheel of a bicycle. This meter consists of a magnetic sensor (attached to the fork of the bicycle), a transmitter (permanent
magnet attached to a spoke of the wheel). Its function is to measure, to within one wheel perimeter, the distance traveled by the cyclist. The cyclist travels through one wheel turn each time the magnet passes in front of the sensor.

6.7.2.1 Dataflow Model
The two constants of the description are PI and wheel diameter Dist. dec Real : Prop; dec DOREALSTR : [Real -> @(TStream, Real)]; dec PI : @(TStream, Real); dec Dist : @(TStream, Real); dec REPEATZERO : @(TStream, Real); dec REALSTRPLUS : [@(TStream, Real) -> @(TStream, Real) -> @(TStream, Real)]; dec REALSTRTIMES : [@(TStream, Real) -> @(TStream, Real) -> @(TStream, Real)];
The inputs of the system are a Boolean recpt, which is true (or false) when the transmitter passes (or does not pass) in front of the sensor, a cyclic event, and a Boolean init indicating a start.

def Distance =
  \(init : @(TStream, Bool), recpt : @(TStream, Bool))
  lec dist : @(TStream, Real) in
  let dist = @(GIFTHELTSTR, Real, init,
               REPEATZERO,
               @(GIFTHELTSTR, Real, recpt,
                 @(REALSTRPLUS, @(TPRES, Real, dist),
                   @(REALSTRTIMES, PI, Dist)),
                 @(TPRES, Real, dist))) in
  dist;
The interlacing of the model flows is represented in the table below. Each column represents a cycle on the discrete time scale.

PI       : 3.14  3.14  3.14  3.14   3.14   3.14
Dist     : 1     1     1     1      1      1
init     : TT    FF    FF    FF     FF     FF
recpt    : TT    TT    TT    TT     TT     TT
Distance : 0.0   3.14  6.28  9.42   12.56  15.70

Table 6.1: Interlace of Dataflow
6.7.2.2 Calculating Cycle Time
The execution time for this model depends on the performance of the hardware on which it will be installed. The maximum limit for the cycle time beyond which the result will be erroneous (called BORNMAX) must be determined. Placed in a real environment, the model reads the value of input recpt from its queue. This value will be read every BORNMAX. Let VMAX be the maximum speed of the bicycle. The shortest time between two passes is Tmin = PI × Dist/VMAX. The program cycle time must therefore be less than Tmin:

TCycle < PI × Dist/VMAX

This condition is the synchronism hypothesis translated into the terms of the problem to be addressed. Taking Dist = 1 m and VMAX = 100 km/h gives BORNMAX = 0.11 s, and therefore TCycle < 0.11 s.
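The numerical claim can be checked directly (units converted explicitly; 0.11 s in the text is the rounded-down value of the exact bound):

```python
import math

def max_cycle_time(dist_m, vmax_kmh):
    """Synchronism bound: T_cycle < PI * Dist / VMAX, in seconds."""
    vmax_ms = vmax_kmh * 1000.0 / 3600.0   # km/h to m/s
    return math.pi * dist_m / vmax_ms

# Dist = 1 m and VMAX = 100 km/h give about 0.113 s, rounded in the text to
# the bound BORNMAX = 0.11 s.
```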
Part III
Semantics of Dataflow Languages
Chapter 7
Synchronous Dataflow Language SCADE/LUSTRE

SCADE/LUSTRE is a synchronous language based on the dataflow model. It is used for describing and checking real-time systems [16, 17, 4, 3].
7.1 Synchronous Model

7.1.1 Zero-Time Computation Assumption
The principle of synchronous programming is based on the idea of zero-time computation. The synchronous programmer assumes that any computational activity of a synchronous program, including communication, takes no time: soft time in a synchronous program is always zero. A synchronous program is executed in the context of some physical or computational process that generates events as stimuli for the program. A synchronous program reacts to events in zero time by instantaneously computing some output (reaction) based on the input and control state of the program. A synchronous program is deterministic if it computes at most one reaction for any event and control state, and reactive if it computes at least one reaction for any event and control state, i.e., if it terminates. The reactivity of a synchronous program, i.e., the instantaneous and deterministic reaction to some stimulus, is what concerns the synchronous programmer. Synchronous programming is therefore often referred to as synchronous reactive programming. The problem for a compiler of synchronous programs is to implement reactivity. Since cyclic language constructs are present in many
synchronous programming languages, the compiler typically needs to prove the existence of finite fixed-points or, in other words, the absence of infinite cycles in a synchronous program. Depending on the representation of the control state of a synchronous program, proving reactivity is either complex, as in languages with explicit control flow such as Esterel, or less difficult, as in dataflow languages such as LUSTRE. In the programmer's mind, mapping soft time in the synchronous model to real time is simple. A synchronous and reactive program always runs at the speed of its context and, if the program is deterministic, even its output is determined by the behavior of its context. From the perspective of the (physical) context, the behavior of a deterministic and reactive program in the synchronous model is perfect: context and program are synchronous. An implementation of a synchronous program may approximate synchrony by computing any reaction to an event before the next event occurs, while the exact time at which a reaction is completed may vary, as shown in Figure 3. Thus a compiler for synchronous programs ideally implements not only code generation but also a prover for reactivity and synchrony. A difficult part of showing synchrony is estimating the (worst-case) execution times of program code.
7.1.2 The Synchronism Hypothesis
In most “real-time” languages one can write delay instructions with classical temporal units (typically seconds). Hence one can write something like

    delay 2; delay 3;

How does one define what that statement means? Is it equivalent to the statement delay 5? The answer to the first question is often not given, and the answer to the second question may very well be negative, since usual languages are essentially asynchronous. Take for example the OCCAM language, where an external event such as “second” is treated just as an ordinary message (and hence receives a clear semantics). Then the mentioned equivalence has no reason to be true, since messages are treated in a purely asynchronous way: one may expect the equivalence to be “true” if the implementation is “fast enough”, but one has no real control over what will actually happen. For actual real-time applications that approach is not sufficient, since one wants a program to respond to externally generated input stimuli within a “controllable” delay. Although the word “controllable” is vague, it is in no way a synonym of “arbitrary”, and asynchrony can do little for us here. Temporal statements should be semantically well-defined, at least at a “conceptual level” suited to reasoning about programs, and the validity of their implementation should be really checkable (i.e., one should have a reasonable idea of the actual temporal behavior of the compiled code).
The synchronous model was introduced to provide ideal primitives for reasoning about reactive systems that instantaneously process external data. Each system output is dated with respect to the input dataflow. A discrete time scale is introduced. Its granularity is considered, a priori, to be adapted to the time constraints imposed by the dynamics of the environment with which the system to be described interacts; this is checked a posteriori. Each instant of the scale corresponds to a new calculation cycle, that is, to the arrival of new inputs. The synchronism hypothesis is as follows: the computing means are sufficiently powerful for the calculation time of the outputs with respect to the inputs to be lower than the grain of this discrete time scale. The result is that the outputs are calculated at the same instant (with regard to the discrete time scale) as the one at which the inputs are taken into account.
7.2 Dataflow Model
The dataflow model is based on the principle of processing data while it is in “motion”, flowing through a dataflow network. A dataflow network is a system of nodes (processing stations) connected by a number of communication channels, or arcs. The dataflow model has many advantages:

• Maximal parallelism (the only constraints are the dependencies between data);
• Formal mathematical properties (formal verification methods);
• Easy construction and modification of programs;
• Easy graphic description of a system.
7.3 Synchronous Dataflow Model
The synchronous dataflow approach introduces a temporal dimension into the dataflow model. A natural way of doing this is to relate the time and the data in the flows. Indeed, the handled entities can be naturally interpreted as timed functions. A basic entity (or flow) is a pair formed of

• A sequence of values of a given type;
• A clock representing a sequence of instants (on a discrete time scale).

A flow takes the ith value of its sequence at the ith instant of its clock. The temporal dimension therefore underlies all descriptions of this type of model.
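A flow can be modeled directly as such a pair; a minimal sketch in Python (the class `Flow` and the method name `value_at` are ours, introduced only for illustration):

```python
# A flow pairs a sequence of values with a clock: a sequence of booleans
# marking the instants (of the enclosing time scale) at which the flow
# is present.
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Flow:
    values: List[Any]   # the i-th value of the sequence
    clock: List[bool]   # the i-th instant: is the flow present?

    def value_at(self, i: int):
        """The flow takes the n-th value of its sequence at the
        n-th true instant of its clock; None elsewhere."""
        ticks = [k for k, present in enumerate(self.clock) if present]
        return self.values[ticks.index(i)] if i in ticks else None

f = Flow(values=[10, 20, 30], clock=[True, False, True, False, True])
print(f.value_at(2))  # second tick of the clock -> second value: 20
```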
7.4 SCADE/LUSTRE Language
SCADE/LUSTRE is a synchronous language based on the dataflow model. The synchronous interpretation induces constraints on the type of input/output relations that can be expressed: the output of a program at a given instant cannot depend on future inputs (causality principle) and can only depend on a limited quantity of past inputs (each cycle can conserve the value of an input at the previous instant). A SCADE/LUSTRE model describes the relation between the inputs and the outputs of a system. This relation is expressed by means of operators, intermediary variables and constants. Each SCADE/LUSTRE model is structured into operator networks. An operator corresponds to a function of the system and describes a relation between its input and output parameters. It is used by simply passing variables at its inputs and outputs. To take the synchronism hypothesis into account, we assume that each network operator responds instantaneously to its inputs. There are several categories of operators:

• The basic operators of the language;
• More complex operators, called nodes, whose definitions in SCADE/LUSTRE are given by the user and which are expressed by means of a system of equations;
• Imported operators, for which only the declaration in SCADE/LUSTRE is given by the user.

A SCADE/LUSTRE model is therefore a set of declarations of types, constants and operators. The applications described in SCADE/LUSTRE have a functional behavior independent of the rate at which they run. These applications can therefore be validated functionally (without taking temporal validation into account) by testing them on a machine other than the target machine (in particular, on the development platform). The temporal validation is performed on the target machine. If the calculation time is lower than the time interval between two instants of the discrete time scale, it is considered null and the synchronism hypothesis is verified. The interval between two instants of the time scale is imposed by the specifications; the calculation time depends on the software and hardware implementation. SCADE/LUSTRE is therefore a deterministic language from both a functional and a temporal viewpoint.
7.4.1 LUSTRE Program Structure
A LUSTRE system of equations can be represented graphically as a network of operators. For instance, the equation

    n = 0 -> pre(n) + 1;

defines a counter of basic clock cycles. This naturally suggests some notion of subroutine: a subnetwork can be encapsulated as a new reusable operator, called a node. A node declaration consists of an interface specification (providing input and output parameters with their types and possibly their clocks), optional internal variable declarations, and a body made of equations and assertions defining outputs and internal variables as functions of inputs. For instance, the following node defines a general-purpose counter, having as inputs an initial-and-reset value, an increment value, and a reset event:

    node COUNTER(val_init, val_incr : int; reset : bool)
    returns (n : int);
    let
      n = val_init -> if reset then val_init
                      else pre(n) + val_incr;
    tel.

Such a node can be functionally instanced in any expression. For instance,

    even = COUNTER(0, 2, false);
    modulo5 = COUNTER(0, 1, pre(modulo5)=4);

define the sequence of even numbers and the cyclic sequence of modulo-5 numbers, over the basic clock. Similarly, if gamma is an acceleration expressed in meters per second squared, and its clock's rate is one per second, one could have

    speed = COUNTER(0, gamma, false);
    position = COUNTER(0, speed, false);

According to the substitution principle, this is equivalent to:

    position = COUNTER(0, COUNTER(0, gamma, false), false);
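The stream behavior of COUNTER can be simulated over finite sequences; a sketch (the function `counter` and its one-list-per-input convention are ours, not part of LUSTRE):

```python
def counter(val_init, val_incr, reset, n_steps):
    """Stream semantics of the COUNTER node:
    n = val_init -> if reset then val_init else pre(n) + val_incr."""
    out, prev = [], None
    for i in range(n_steps):
        if i == 0 or reset[i]:
            n = val_init[i]          # initial value, or reset
        else:
            n = prev + val_incr[i]   # pre(n) + val_incr
        out.append(n)
        prev = n
    return out

N = 5
even = counter([0] * N, [2] * N, [False] * N, N)
print(even)  # [0, 2, 4, 6, 8]
```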
A node may have several outputs; in that case, the output is a tuple. For instance,

    node D_INTEGRATOR(gamma : int)
    returns (speed, position : int);
    let
      speed = COUNTER(0, gamma, false);
      position = COUNTER(0, speed, false);
    tel.

is instanced as

    (v, x) = D_INTEGRATOR(g);

Concerning clocks, the basic clock of a node is defined by its inputs, so as to be consistent with the dataflow point of view. For instance, the expression

    COUNTER((0, 1, false) when B)

counts only when B is true. In this example, the operator when applies to the tuple (0, 1, false). Table 7.1 shows the result of the expression, and the difference with the expression

    (COUNTER(0, 1, false)) when B

where sampling applies to the output of the node instead of its inputs.

    B                            true         false  true         false  true
    (0,1,false) when B           (0,1,false)         (0,1,false)         (0,1,false)
    COUNTER((0,1,false) when B)  0                   1                   2
    COUNTER(0,1,false)           0            1      2            3      4
    (COUNTER(0,1,false)) when B  0                   2                   4

    Table 7.1: Nodes and Clocks

This example also stresses the interest of clocks for reuse; had clocks not been available, the only way of getting the same effect would have been to modify the node by adding a “do-nothing” input. A node may admit input parameters with distinct clocks. Then the fastest one is the basic clock of the node, and all other clocks must be defined in the input declaration list. In the following example:

    node N(millisecond : bool;
           (x : int; y : bool) when millisecond)
    returns ...

the basic clock of the node is that of millisecond, and the clock of x and y is the one defined by millisecond. Outputs of a node may have clocks different from its basic clock. Then these clocks should be visible from the outside of the node. Note also that these clocks
are certainly slower than the basic one.
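The difference recorded in Table 7.1 between sampling a node's inputs and sampling its output can be replayed on finite streams (a sketch; the helper `counter_stream` is ours):

```python
def counter_stream(inputs):
    """Run COUNTER over a list of (val_init, val_incr, reset) tuples."""
    out, prev = [], None
    for i, (v0, incr, reset) in enumerate(inputs):
        n = v0 if (i == 0 or reset) else prev + incr
        out.append(n)
        prev = n
    return out

B = [True, False, True, False, True]
ins = [(0, 1, False)] * len(B)

# COUNTER((0,1,false) when B): sample the inputs, run on the slow clock.
sampled_inputs = [x for x, b in zip(ins, B) if b]
print(counter_stream(sampled_inputs))     # [0, 1, 2]

# (COUNTER(0,1,false)) when B: run on the basic clock, sample the output.
full = counter_stream(ins)
print([x for x, b in zip(full, B) if b])  # [0, 2, 4]
```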
7.4.2 Some Programming Examples
7.4.2.1 Kilometer Meter
A kilometer meter is attached to the wheel of a bicycle. The meter consists of a magnetic sensor (attached to the fork of the bicycle) and a transmitter (a permanent magnet attached to a spoke of the wheel). Its function is to measure, to within one wheel perimeter, the distance traveled by the cyclist. The cyclist travels one wheel turn each time the magnet passes in front of the sensor.

    const PI : real = 3.1415926;
          D  : real = 1.0;

    node DIST_PAR(init, reception : bool)
    returns (distance : real);
    let
      distance = if init then 0.0
                 else if reception then pre(distance) + D * PI
                 else pre(distance);
    tel;
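The DIST_PAR equation can be exercised on a sample input trace (a sketch; the function name and the trace are ours, invented for illustration):

```python
import math

D = 1.0  # wheel diameter in meters

def dist_par(init, reception):
    """distance = if init then 0.0
                  else if reception then pre(distance) + D*PI
                  else pre(distance)"""
    out, prev = [], 0.0
    for ini, rec in zip(init, reception):
        d = 0.0 if ini else (prev + D * math.pi if rec else prev)
        out.append(d)
        prev = d
    return out

# One init cycle, then three magnet passes: three wheel turns travelled.
trace = dist_par(init=[True, False, False, False],
                 reception=[False, True, True, True])
print(round(trace[-1], 4))  # 3 * PI * D, about 9.4248 meters
```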
7.4.2.2 Linear Systems
Translating sampled linear systems into LUSTRE programs is quite an obvious task: if the systems are expressed in z-transform equations, it amounts to translating the z^{-1} operator into 0.0 -> pre(). For instance, consider the 2nd-order filter

    H(z) = (a*z^2 + b*z + c) / (z^2 + d*z + e)

The output y = H(z)x can be written

    y = a*x + (b*x - d*y)*z^{-1} + (c*x - e*y)*z^{-2}
and yields the following program:

    const a, b, c, d, e : real;

    node SECOND_ORDER(x : real) returns (y : real);
    var u, v : real;
    let
      y = a * x + (0. -> pre(u));
      u = b * x - d * y + (0. -> pre(v));
      v = c * x - e * y;
    tel.

Furthermore, clocks allow an easy extension to multiply sampled systems.

7.4.2.3 Non-Linear and Time-Varying Systems
Letting the identifiers a, b, c, d, e be parameters of the SECOND_ORDER node, instead of constants, yields a time-varying filter. Non-linear systems are also easy to describe. For instance:

    y = rho * cos(theta0 -> pre(theta));
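The SECOND_ORDER equations can be checked numerically against the direct recurrence y_n = a*x_n + (b*x_{n-1} - d*y_{n-1}) + (c*x_{n-2} - e*y_{n-2}) implied by the z-transform above (a sketch with arbitrary coefficients; both functions are ours):

```python
def second_order(x, a, b, c, d, e):
    """LUSTRE program: y = a*x + (0. -> pre(u));
                       u = b*x - d*y + (0. -> pre(v));
                       v = c*x - e*y;"""
    y, u_prev, v_prev = [], 0.0, 0.0
    for xn in x:
        yn = a * xn + u_prev
        un = b * xn - d * yn + v_prev
        vn = c * xn - e * yn
        y.append(yn)
        u_prev, v_prev = un, vn
    return y

def direct(x, a, b, c, d, e):
    """Direct form of the same recurrence."""
    y = []
    for n, xn in enumerate(x):
        yn = a * xn
        if n >= 1:
            yn += b * x[n - 1] - d * y[n - 1]
        if n >= 2:
            yn += c * x[n - 2] - e * y[n - 2]
        y.append(yn)
    return y

xs = [1.0, 0.0, 0.0, 0.0, 0.0]           # impulse input
ys1 = second_order(xs, 0.5, 0.3, 0.2, 0.1, 0.4)
ys2 = direct(xs, 0.5, 0.3, 0.2, 0.1, 0.4)
print(all(abs(p - q) < 1e-9 for p, q in zip(ys1, ys2)))  # True
```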
7.4.2.4 Logical Systems
From the previous discussion, dataflow programs of signal processing systems are very close to their specification in terms of systems of dynamical equations. However many systems have an important logical component, and some of them, for instance monitoring systems, are essentially logical systems. Such systems are most often described in terms of automata, parallel automata (Statecharts for instance), and Petri nets, i.e., imperative formalisms which describe states and transitions between states. The question about the adequacy of dataflow paradigms to provide easy descriptions of such systems should therefore be carefully checked. The following examples are intended to show that these paradigms may allow easy, incremental and modular descriptions of logical systems. In this subsection we shall consider three versions of a “watchdog”, i.e., a device that monitors response times. The first version receives three events: set and reset commands, and deadline occurrence. The output is an alarm that must be raised whenever a deadline occurs and the last received command was a set. As usual, events are represented by boolean variables whose value true denotes the presence of an event. The watchdog will be a LUSTRE node having three boolean inputs set, reset and deadline and emitting a boolean output alarm. As the order of equations is unimportant, we begin by defining the output:
alarm is true when deadline is true and the last true command was set. Let is_set be a local boolean variable expressing the latter condition. Then we can write:

    alarm = deadline and is_set;

It remains to define is_set, which becomes true any time set is true, and false any time reset is true. Initially, it is true if set is true and false otherwise:

    is_set = set -> if set then true
                    else if reset then false
                    else pre(is_set);

We can furthermore assume that set and reset commands never take place at the same time, which can be expressed by an assertion. The full program is:

    node WD1(set, reset, deadline : bool)
    returns (alarm : bool);
    var is_set : bool;
    let
      alarm = deadline and is_set;
      is_set = set -> if set then true
                      else if reset then false
                      else pre(is_set);
      assert not(set and reset);
    tel.

Let us now consider a second version which receives the same commands, but raises the alarm when no reset has occurred for a given time since the last set, this time being given as a number of basic clock cycles. This new program reuses node WD1, by providing it with an appropriate deadline parameter: on reception of a set event, a register is initialized, and then decremented. The deadline occurs when the register value reaches zero; it is built from a general-purpose node EDGE which returns true at each rising edge of its input:

    node EDGE(b : bool) returns (edge : bool);
    let
      edge = false -> (b and not pre(b));
    tel.
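The behavior of WD1 can be simulated on a short event trace (a sketch; the function `wd1` and the trace are ours, chosen to show one armed deadline and one disarmed deadline):

```python
def wd1(set_, reset, deadline):
    """alarm = deadline and is_set;
       is_set = set -> if set then true else if reset then false
                else pre(is_set);"""
    alarm, is_set_prev = [], None
    for i, (s, r, d) in enumerate(zip(set_, reset, deadline)):
        assert not (s and r)              # assert not(set and reset)
        if i == 0 or s:
            is_set = s
        elif r:
            is_set = False
        else:
            is_set = is_set_prev
        alarm.append(d and is_set)
        is_set_prev = is_set
    return alarm

#           set v         reset v      deadline after reset: no alarm
print(wd1(set_     =[True,  False, False, False],
          reset    =[False, False, True,  False],
          deadline =[False, True,  False, True]))
# [False, True, False, False]
```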
    node WD2(set, reset : bool; delay : int)
    returns (alarm : bool);
    var remain : int; deadline : bool;
    let
      alarm = WD1(set, reset, deadline);
      deadline = false -> EDGE(remain = 0);
      remain = if set then delay
               else if pre(remain) > 0 then pre(remain) - 1
               else pre(remain);
    tel.

Assume now that the delay is expressed according to a given time-scale, i.e., as a number of occurrences of an event time_unit. We just have to call WD2 with an appropriate clock: WD2 must catch any time_unit event and any commands, and must be properly initialized so that alarm never yields nil:

    node WD3(set, reset, time_unit : bool; delay : int)
    returns (alarm : bool);
    var clock : bool;
    let
      alarm = current(WD2((set, reset, delay) when clock));
      clock = true -> (set or reset or time_unit);
    tel.

Coming back to the question raised at the beginning of the section, we can see that the programs have been written without referring to transitions between states, but rather by describing states in terms of state variables, and by stating the strongest invariant property of each state variable. All the state variables then evolve in parallel, recreating the global state of the system. It has been shown that any finite-state machine can be described by a boolean LUSTRE program.

7.4.2.5 Mixed Logical and Signal Processing Systems
Finally, mixing signal processing and logical systems is quite an easy task: Signal processing parts provide logical ones with Boolean expressions by using relational operators, and conversely, logical components control signal flows by means of conditional operators: if-then-else, when and current.
Chapter 8
Abstract Syntax of SCADE/LUSTRE

In LUSTRE, any variable and expression denotes a flow, i.e., a pair made of

• A possibly infinite sequence of values of a given type;
• A clock, representing a sequence of times.

A flow takes the nth value of its sequence of values at the nth time of its clock. Any program, or piece of program, has a cyclic behavior, and that cycle defines a sequence of times which is called the basic clock of the program: a flow whose clock is the basic clock takes its nth value at the nth execution cycle of the program. Other, slower, clocks can be defined, thanks to boolean-valued flows: the clock defined by a boolean flow is the sequence of times at which the flow takes the value true. For instance, Table 8.1 displays the time-scales defined by a flow C whose clock is the basic clock, and by a flow C' whose clock is defined by C.

    Basic time-scale  1      2      3      4      5      6     7      8
    C                 true   false  true   true   false  true  false  true
    C time-scale      1             2      3             4            5
    C'                false         true   false         true         true
    C' time-scale                   1                    2             3

    Table 8.1: Boolean Flows and Clocks

It should be noticed that the clock concept is not necessarily bound to physical time. As a matter of fact, the basic clock should be considered as setting the minimal “grain” of time within which a program cannot discriminate external
events, and which corresponds to its response time. If “real time” is required, it can be implemented as an input boolean flow: for instance a flow whose true value indicates the occurrence of a “millisecond” signal. This point of view provides a multiform concept of time: “millisecond” becomes a time-scale of the program among others.
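The nested time-scales of Table 8.1 can be derived mechanically from the boolean flows themselves (a sketch; the helper `ticks` is ours, C is on the basic clock and C' on the clock defined by C):

```python
def ticks(clock_flow):
    """0-based positions, on the enclosing scale, at which a boolean
    flow is true, i.e. the time-scale that flow defines."""
    return [i for i, b in enumerate(clock_flow) if b]

C  = [True, False, True, True, False, True, False, True]  # on the basic clock
Cp = [False, True, False, True, True]                     # on the clock of C

c_scale = ticks(C)           # basic-clock instants belonging to C's scale
print(c_scale)               # [0, 2, 3, 5, 7]

# C' lives on C's scale; map its true instants back to the basic clock.
print([c_scale[i] for i in ticks(Cp)])  # [2, 5, 7]
```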
8.1 Lexicon of SCADE/LUSTRE
8.1.1 Identifiers
dec Identifier : Prop;

dec ERR_IDENT : Identifier;
dec ERR_INT   : Int;
dec ERR_NAT   : Nat;
dec ERR_BOOL  : Bool;

8.1.2 Identifier List
def Ident_list = @(List, Identifier);

def HEAD_ILST     = @(HEAD, Identifier);
def TAIL_ILST     = @(TAIL, Identifier);
def IS_EMPTY_ILST = @(IS_EMPTY, Identifier);
8.1.3 Binary Operators

8.1.3.1 Type Definition of Binop
def Binop = !(T : Prop) [T -> T -> T -> T -> T -> T -> T -> T -> T -> T -> T -> T -> T -> T];
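This is a Church-style encoding: a Binop is a function that, given one alternative per operator, returns the alternative belonging to "its" operator. The same idea in Python (a sketch; only three of the thirteen operators are shown, and all names are ours):

```python
# Each operator selects its own alternative; abbreviated to three
# alternatives for readability (the PowerEpsilon Binop has thirteen).
PLUS  = lambda x0, x1, x2: x0
MINUS = lambda x0, x1, x2: x1
TIMES = lambda x0, x1, x2: x2

# Predicates are applications to booleans, exactly like IS_PLUS etc.
def is_plus(op):  return op(True, False, False)
def is_minus(op): return op(False, True, False)

# Interpretation is another application: pass one meaning per operator.
def apply_binop(op, a, b):
    return op(a + b, a - b, a * b)

print(is_plus(PLUS), is_minus(PLUS))  # True False
print(apply_binop(TIMES, 6, 7))       # 42
```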
8.1.3.2 Constructors
def PLUS =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x0;
def MINUS =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x1;
def TIMES =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x2;
def DIV =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x3;
def MOD =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x4;
def EQ =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x5;
def UNEQ =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x6;
def LT =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x7;
def LE =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x8;
def GT =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x9;
def GE =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x10;
def ANDSS =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x11;
def ORS =
  \(T : Prop)
  \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T,
    x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x12;

dec ERR_BINOP : Binop;
8.1.3.3 Predicate Functions
def IS_PLUS  = \(o : Binop)
  @(o, Bool, TT, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF);
def IS_MINUS = \(o : Binop)
  @(o, Bool, FF, TT, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF);
def IS_TIMES = \(o : Binop)
  @(o, Bool, FF, FF, TT, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF);
def IS_DIV   = \(o : Binop)
  @(o, Bool, FF, FF, FF, TT, FF, FF, FF, FF, FF, FF, FF, FF, FF);
def IS_MOD   = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, TT, FF, FF, FF, FF, FF, FF, FF, FF);
def IS_EQ    = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, FF, TT, FF, FF, FF, FF, FF, FF, FF);
def IS_UNEQ  = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, FF, FF, TT, FF, FF, FF, FF, FF, FF);
def IS_LT    = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, FF, FF, FF, TT, FF, FF, FF, FF, FF);
def IS_LE    = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, TT, FF, FF, FF, FF);
def IS_GT    = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, FF, TT, FF, FF, FF);
def IS_GE    = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, TT, FF, FF);
def IS_ANDSS = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, TT, FF);
def IS_ORS   = \(o : Binop)
  @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, TT);
8.1.4 Unary Operators

8.1.4.1 Type Definition of Unop
def Unop = !(T : Prop) [T -> T -> T];
8.1.4.2 Constructors
def UNNOT = \(T : Prop) \(x : T, y : T) x;
def UNNEG = \(T : Prop) \(x : T, y : T) y;

dec ERR_UNOP : Unop;
8.1.4.3 Predicate Functions
def IS_UNNOT = \(o : Unop) @(o, Bool, TT, FF);
def IS_UNNEG = \(o : Unop) @(o, Bool, FF, TT);
8.2 Syntax of Expressions
SCADE/LUSTRE expressions are constructed from variables, constants, literals, operators (predefined operators, nodes, imported operators) and activation conditions. Variables must be declared with their types, and variables which do not correspond to inputs must be given one and only one definition, in the form of an equation. Equations are considered in a mathematical sense: the equation “X = E;” defines variable X as being identical to expression E; both have the same sequence of values and the same clock. However, such an equation is oriented, in the sense that it defines X: the way X is used in other equations cannot give it more properties than those which arise from its definition. This provides one important principle of the language, the substitution principle: X can be substituted for E anywhere in the program, and conversely. As a consequence, equations can be written in any order, and extra variables can be created so as to give names to subexpressions, without changing the meaning of the program.

LUSTRE has only a few elementary basic types (boolean, integer, real) and one type constructor (tuple). However, complex types can be imported from a host language and handled as abstract types (a similar mechanism exists in ESTEREL). Constants are those of the basic types and those imported from the host language (for instance, constants of imported types). The corresponding flows have constant sequences of values, and their clock is the basic one. The usual operators over the basic types are available (arithmetic: +, -, *, /, div, mod; boolean: and, or, not; relational: =, <>, <, <=, >, >=; conditional: if-then-else), and functions can be imported from the host language. These are called data operators; they only operate on operands sharing the same clock, and they operate point-wise on the sequences of values of their operands. For instance, if X and Y are on the basic clock, and their sequences of values are respectively (x1, x2, ..., xn, ...) and (y1, y2, ..., yn, ...), the expression

    if X > 0 then Y+1 else 0

is a flow on the basic clock whose nth value, for any integer n, is

    if xn > 0 then yn + 1 else 0

Besides these operators, LUSTRE has four more, called “temporal” operators, which operate specifically on flows: pre, ->,
when and current.
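The four temporal operators have a direct reading on finite sequences; a sketch (function names are ours, nil is represented by None, and a flow on a slow clock is kept as its list of values at the clock's true instants):

```python
def pre(xs):
    """pre(X): the sequence shifted by one; the first value is nil (None)."""
    return [None] + xs[:-1]

def arrow(xs, ys):
    """X -> Y: the first value of X, then Y."""
    return xs[:1] + ys[1:]

def when(xs, cs):
    """X when C: the subsequence of X at the instants where C is true;
    the result lives on the (slower) clock defined by C."""
    return [x for x, c in zip(xs, cs) if c]

def current(xs_slow, cs):
    """current(X): back on the clock of C, repeating the value taken at
    the last true instant of C (nil before the first one)."""
    out, it, last = [], iter(xs_slow), None
    for c in cs:
        if c:
            last = next(it)
        out.append(last)
    return out

X = [1, 2, 3, 4, 5]
C = [True, False, True, False, True]
print(arrow([0] * 5, pre(X)))   # [0, 1, 2, 3, 4], i.e. 0 -> pre(X)
print(when(X, C))               # [1, 3, 5]
print(current(when(X, C), C))   # [1, 1, 3, 3, 5]
```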
8.2.1 Expressions

8.2.1.1 Type Definition of Expression
dec Expr_list : Prop;
dec ERR_EXPL  : Expr_list;

def Expression =
  !(T : Prop)
  [[Bool -> T] ->                      %-- Bool           --%
   [Identifier -> T] ->                %-- Identifier     --%
   [Int -> T] ->                       %-- Integer        --%
   T ->                                %-- Null           --%
   [Expr_list -> T] ->                 %-- Tuple          --%
   [T -> Nat -> T] ->                  %-- Projection     --%
   [Identifier -> Expr_list -> T] ->   %-- Call Operation --%
   [Binop -> T -> T -> T] ->           %-- Binary         --%
   [Unop -> T -> T] ->                 %-- Unary          --%
   [T -> T] ->                         %-- PRE            --%
   [T -> T -> T] ->                    %-- Initialization --%
   [T -> T] ->                         %-- Current        --%
   [T -> T -> T] ->                    %-- When           --%
   [T -> T -> T -> T] ->               %-- If-Then-Else   --%
   T];

dec ERR_EXPR : Expression;
8.2.1.2 Constructors of Expression
dec MK_TRUE  : Expression;
dec MK_FALSE : Expression;
dec MK_BINOP : [Binop -> Expression -> Expression -> Expression];
dec MK_UNOP  : [Unop -> Expression -> Expression];
dec MK_TUPLE : [Expr_list -> Expression];
dec MK_POS   : [Expression -> Nat -> Expression];
dec MK_IF    : [Expression -> Expression -> Expression -> Expression];
dec MK_INT : [Int -> Expression]; dec MK_NULL : Expression; def MK_BOOL = \(b : Bool) \(T : Prop) \(bool : [Bool -> T], ident : [Identifier -> T], integ : [Int -> T], null : T, elst : [Expr_list -> T], pos : [T -> Nat -> T], calop : [Identifier -> Expr_list -> T], binop : [Binop -> T -> T -> T], unop : [Unop -> T -> T], pre : [T -> T], init : [T -> T -> T], curr : [T -> T], when : [T -> T -> T], ifth : [T -> T -> T -> T]) @(bool, b); def MK_IDENT = \(i : Identifier) \(T : Prop) \(bool : [Bool -> T], ident : [Identifier -> T], integ : [Int -> T], null : T, elst : [Expr_list -> T], pos : [T -> Nat -> T], calop : [Identifier -> Expr_list -> T], binop : [Binop -> T -> T -> T], unop : [Unop -> T -> T], pre : [T -> T], init : [T -> T -> T], curr : [T -> T], when : [T -> T -> T], ifth : [T -> T -> T -> T]) @(ident, i); def MK_CALL = \(i : Identifier, l : Expr_list) \(T : Prop) \(bool : [Bool -> T],
ident : [Identifier -> T],
integ : [Int -> T],
null  : T,
elst  : [Expr_list -> T],
pos   : [T -> Nat -> T],
calop : [Identifier -> Expr_list -> T],
binop : [Binop -> T -> T -> T],
unop  : [Unop -> T -> T],
pre   : [T -> T],
init  : [T -> T -> T],
curr  : [T -> T],
when  : [T -> T -> T],
ifth  : [T -> T -> T -> T])
@(calop, i, l);
def MK_PRE = \(e : Expression) \(T : Prop) \(bool : [Bool -> T], ident : [Identifier -> T], integ : [Int -> T], null : T, elst : [Expr_list -> T], pos : [T -> Nat -> T], calop : [Identifier -> Expr_list -> T], binop : [Binop -> T -> T -> T], unop : [Unop -> T -> T], pre : [T -> T], init : [T -> T -> T], curr : [T -> T], when : [T -> T -> T], ifth : [T -> T -> T -> T]) @(pre, @(e, T, bool, ident, integ, null, elst, pos, calop, binop, unop, pre, init, curr,
when, ifth)); def MK_INIT = \(e1 : Expression, e2 : Expression) \(T : Prop) \(bool : [Bool -> T], ident : [Identifier -> T], integ : [Int -> T], null : T, elst : [Expr_list -> T], pos : [T -> Nat -> T], calop : [Identifier -> Expr_list -> T], binop : [Binop -> T -> T -> T], unop : [Unop -> T -> T], pre : [T -> T], init : [T -> T -> T], curr : [T -> T], when : [T -> T -> T], ifth : [T -> T -> T -> T]) @(init, @(e1, T, bool, ident, integ, null, elst, pos, calop, binop, unop, pre, init, curr, when, ifth), @(e2, T, bool, ident, integ, null, elst, pos, calop, binop,
unop, pre, init, curr, when, ifth)); def MK_CURRENT = \(e : Expression) \(T : Prop) \(bool : [Bool -> T], ident : [Identifier -> T], integ : [Int -> T], null : T, elst : [Expr_list -> T], pos : [T -> Nat -> T], calop : [Identifier -> Expr_list -> T], binop : [Binop -> T -> T -> T], unop : [Unop -> T -> T], pre : [T -> T], init : [T -> T -> T], curr : [T -> T], when : [T -> T -> T], ifth : [T -> T -> T -> T]) @(curr, @(e, T, bool, ident, integ, null, elst, pos, calop, binop, unop, pre, init, curr, when, ifth)); def MK_WHEN = \(e1 : Expression, e2 : Expression) \(T : Prop) \(bool : [Bool -> T], ident : [Identifier -> T],
integ : [Int -> T], null : T, elst : [Expr_list -> T], pos : [T -> Nat -> T], calop : [Identifier -> Expr_list -> T], binop : [Binop -> T -> T -> T], unop : [Unop -> T -> T], pre : [T -> T], init : [T -> T -> T], curr : [T -> T], when : [T -> T -> T], ifth : [T -> T -> T -> T]) @(when, @(e1, T, bool, ident, integ, null, elst, pos, calop, binop, unop, pre, init, curr, when, ifth), @(e2, T, bool, ident, integ, null, elst, pos, calop, binop, unop, pre, init, curr, when, ifth));
8.2.1.3 Predicate Functions of Expression
def IS_IDENT_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) TT, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_BOOL_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) TT, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_INT_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) TT,
FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_NIL_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, TT, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_CALL_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) TT, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF,
\(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); dec IS_BINOP_EXPR : [Expression -> Bool]; dec IS_UNOP_EXPR : [Expression -> Bool]; def IS_TUPLE_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) TT, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_POS_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) TT, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_PRE_EXPR = \(e : Expression) @(e,
Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) TT, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_INIT_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) TT, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_CURRENT_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF,
\(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) TT, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) FF); def IS_WHEN_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) TT, \(if : Bool, th : Bool, el : Bool) FF); def IS_IF_EXPR = \(e : Expression) @(e, Bool, \(b : Bool) FF, \(ident : Identifier) FF, \(integ : Int) FF, FF, \(elst : Expr_list) FF, \(p : Bool, i : Nat) FF, \(op : Identifier, l : Expr_list) FF, \(op : Binop, b1 : Bool, b2 : Bool) FF, \(op : Unop, b : Bool) FF, \(pre : Bool) FF, \(init : Bool, f : Bool) FF, \(curr : Bool) FF, \(when : Bool, c : Bool) FF, \(if : Bool, th : Bool, el : Bool) TT);
8.2.1.4 Projectors of Expression
dec GET_EXPR_BIN_OP : [Expression -> Binop]; dec GET_EXPR_BIN_LEXP : [Expression -> Expression]; dec GET_EXPR_BIN_REXP : [Expression -> Expression]; dec GET_EXPR_UNOP : [Expression -> Unop]; dec GET_EXPR_UNEXP : [Expression -> Expression]; def GET_EXPR_BOOL = \(e : Expression) @(e, Bool, \(b : Bool) b, \(ident : Identifier) ERR_BOOL, \(integ : Int) ERR_BOOL, ERR_BOOL, \(elst : Expr_list) ERR_BOOL, \(p : Bool, i : Nat) ERR_BOOL, \(op : Identifier, l : Expr_list) ERR_BOOL, \(op : Binop, b1 : Bool, b2 : Bool) ERR_BOOL, \(op : Unop, b : Bool) ERR_BOOL, \(pre : Bool) ERR_BOOL, \(init : Bool, f : Bool) ERR_BOOL, \(curr : Bool) ERR_BOOL, \(when : Bool, c : Bool) ERR_BOOL, \(if : Bool, th : Bool, el : Bool) ERR_BOOL); def GET_EXPR_INT = \(e : Expression) @(e, Int, \(b : Bool) ERR_INT, \(ident : Identifier) ERR_INT, \(integ : Int) integ, ERR_INT, \(elst : Expr_list) ERR_INT, \(p : Int, i : Nat) ERR_INT, \(op : Identifier, l : Expr_list) ERR_INT, \(op : Binop, b1 : Int, b2 : Int) ERR_INT, \(op : Unop, b : Int) ERR_INT, \(pre : Int) ERR_INT, \(init : Int, f : Int) ERR_INT, \(curr : Int) ERR_INT, \(when : Int, c : Int) ERR_INT, \(if : Int, th : Int, el : Int) ERR_INT);
def GET_EXPR_IDENT = \(e : Expression) @(e, Identifier, \(b : Bool) ERR_IDENT, \(ident : Identifier) ident, \(integ : Int) ERR_IDENT, ERR_IDENT, \(elst : Expr_list) ERR_IDENT, \(p : Identifier, i : Nat) ERR_IDENT, \(op : Identifier, l : Expr_list) ERR_IDENT, \(op : Binop, b1 : Identifier, b2 : Identifier) ERR_IDENT, \(op : Unop, b : Identifier) ERR_IDENT, \(pre : Identifier) ERR_IDENT, \(init : Identifier, f : Identifier) ERR_IDENT, \(curr : Identifier) ERR_IDENT, \(when : Identifier, c : Identifier) ERR_IDENT, \(if : Identifier, th : Identifier, el : Identifier) ERR_IDENT); def GET_EXPR_TUPLE = \(e : Expression) @(e, Expr_list, \(b : Bool) ERR_EXPL, \(ident : Identifier) ERR_EXPL, \(integ : Int) ERR_EXPL, ERR_EXPL, \(elst : Expr_list) elst, \(p : Expr_list, i : Nat) ERR_EXPL, \(op : Identifier, l : Expr_list) ERR_EXPL, \(op : Binop, b1 : Expr_list, b2 : Expr_list) ERR_EXPL, \(op : Unop, b : Expr_list) ERR_EXPL, \(pre : Expr_list) ERR_EXPL, \(init : Expr_list, f : Expr_list) ERR_EXPL, \(curr : Expr_list) ERR_EXPL, \(when : Expr_list, c : Expr_list) ERR_EXPL, \(if : Expr_list, th : Expr_list, el : Expr_list) ERR_EXPL); def GET_POS_EXPR = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR, \(elst : Expr_list) ERR_EXPR,
\(p : Expression, i : Nat) p, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) ERR_EXPR, \(curr : Expression) ERR_EXPR, \(when : Expression, c : Expression) ERR_EXPR, \(if : Expression, th : Expression, el : Expression) ERR_EXPR); def GET_POS_INDEX = \(e : Expression) @(e, Nat, \(b : Bool) ERR_NAT, \(ident : Identifier) ERR_NAT, \(integ : Int) ERR_NAT, ERR_NAT, \(elst : Expr_list) ERR_NAT, \(p : Nat, i : Nat) i, \(op : Identifier, l : Expr_list) ERR_NAT, \(op : Binop, b1 : Nat, b2 : Nat) ERR_NAT, \(op : Unop, b : Nat) ERR_NAT, \(pre : Nat) ERR_NAT, \(init : Nat, f : Nat) ERR_NAT, \(curr : Nat) ERR_NAT, \(when : Nat, c : Nat) ERR_NAT, \(if : Nat, th : Nat, el : Nat) ERR_NAT); def GET_EXPR_PRE = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR, \(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) pre, \(init : Expression, f : Expression) ERR_EXPR, \(curr : Expression) ERR_EXPR, \(when : Expression, c : Expression) ERR_EXPR, \(if : Expression, th : Expression, el : Expression) ERR_EXPR);
def GET_CALL_IDENT = \(e : Expression) @(e, Identifier, \(b : Bool) ERR_IDENT, \(ident : Identifier) ERR_IDENT, \(integ : Int) ERR_IDENT, ERR_IDENT, \(elst : Expr_list) ERR_IDENT, \(p : Identifier, i : Nat) ERR_IDENT, \(op : Identifier, l : Expr_list) op, \(op : Binop, b1 : Identifier, b2 : Identifier) ERR_IDENT, \(op : Unop, b : Identifier) ERR_IDENT, \(pre : Identifier) ERR_IDENT, \(init : Identifier, f : Identifier) ERR_IDENT, \(curr : Identifier) ERR_IDENT, \(when : Identifier, c : Identifier) ERR_IDENT, \(if : Identifier, th : Identifier, el : Identifier) ERR_IDENT); def GET_CALL_ELIST = \(e : Expression) @(e, Expr_list, \(b : Bool) ERR_EXPL, \(ident : Identifier) ERR_EXPL, \(integ : Int) ERR_EXPL, ERR_EXPL, \(elst : Expr_list) ERR_EXPL, \(p : Expr_list, i : Nat) ERR_EXPL, \(op : Identifier, l : Expr_list) l, \(op : Binop, b1 : Expr_list, b2 : Expr_list) ERR_EXPL, \(op : Unop, b : Expr_list) ERR_EXPL, \(pre : Expr_list) ERR_EXPL, \(init : Expr_list, f : Expr_list) ERR_EXPL, \(curr : Expr_list) ERR_EXPL, \(when : Expr_list, c : Expr_list) ERR_EXPL, \(if : Expr_list, th : Expr_list, el : Expr_list) ERR_EXPL); def GET_EXPR_CUR = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR,
\(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) ERR_EXPR, \(curr : Expression) curr, \(when : Expression, c : Expression) ERR_EXPR, \(if : Expression, th : Expression, el : Expression) ERR_EXPR); def GET_INIT_INIT = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR, \(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) init, \(curr : Expression) ERR_EXPR, \(when : Expression, c : Expression) ERR_EXPR, \(if : Expression, th : Expression, el : Expression) ERR_EXPR); def GET_INIT_FOLLOW = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR, \(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) f, \(curr : Expression) ERR_EXPR, \(when : Expression, c : Expression) ERR_EXPR,
\(if : Expression, th : Expression, el : Expression) ERR_EXPR); def GET_WHEN_EXPR = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR, \(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) ERR_EXPR, \(curr : Expression) ERR_EXPR, \(when : Expression, c : Expression) when, \(if : Expression, th : Expression, el : Expression) ERR_EXPR); def GET_WHEN_COND = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR, \(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) ERR_EXPR, \(curr : Expression) ERR_EXPR, \(when : Expression, c : Expression) c, \(if : Expression, th : Expression, el : Expression) ERR_EXPR); def GET_IF_THEN = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR,
ERR_EXPR, \(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) ERR_EXPR, \(curr : Expression) ERR_EXPR, \(when : Expression, c : Expression) ERR_EXPR, \(if : Expression, th : Expression, el : Expression) th); def GET_IF_ELSE = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR, \(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) ERR_EXPR, \(curr : Expression) ERR_EXPR, \(when : Expression, c : Expression) ERR_EXPR, \(if : Expression, th : Expression, el : Expression) el); def GET_IF_COND = \(e : Expression) @(e, Expression, \(b : Bool) ERR_EXPR, \(ident : Identifier) ERR_EXPR, \(integ : Int) ERR_EXPR, ERR_EXPR, \(elst : Expr_list) ERR_EXPR, \(p : Expression, i : Nat) ERR_EXPR, \(op : Identifier, l : Expr_list) ERR_EXPR, \(op : Binop, b1 : Expression, b2 : Expression) ERR_EXPR, \(op : Unop, b : Expression) ERR_EXPR, \(pre : Expression) ERR_EXPR, \(init : Expression, f : Expression) ERR_EXPR, \(curr : Expression) ERR_EXPR,
\(when : Expression, c : Expression) ERR_EXPR, \(if : Expression, th : Expression, el : Expression) if);
8.2.2 Expression List
8.2.2.1 Type Definition of Expr_list
def Expr_list = @(List, Expression);
8.2.2.2 Constructor of Expr_list
def IS_EMP_ELIST = @(IS_EMPTY, Expression);
8.2.2.3 Projectors of Expr_list
def HEAD_ELIST = @(HEAD, Expression); def TAIL_ELIST = @(TAIL, Expression);
8.3 Syntax of Equations
Each non-input variable is defined by an equation. The expression defining a variable must be of the same type as the variable. An equation can define one or more variables.
8.3.1 Equations
8.3.1.1 Type Definition of Equation
def Equation = !(T : Prop) [[Ident_list -> Expression -> T] -> T];
8.3.1.2 Constructor of Equation
def MK_EQ = \(il : Ident_list, e : Expression)
\(T : Prop) \(p : [Ident_list -> Expression -> T]) @(p, il, e);
8.3.1.3 Projectors of Equation
dec GET_EQ_IDLIST : [Equation -> Ident_list]; dec GET_EQ_BODY : [Equation -> Expression];
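Since Equation is a Church-encoded pair, the constructor MK_EQ and the two declared projectors can be mimicked in a small Python sketch (an illustrative model outside the PowerEpsilon development; the names are lower-cased counterparts of the originals):

```python
# Illustrative model of Equation: a Church-encoded pair of an identifier
# list and the defining expression.

def mk_eq(il, e):
    # MK_EQ: package the components as a function awaiting a consumer p.
    return lambda p: p(il, e)

def get_eq_idlist(eq):
    # GET_EQ_IDLIST: select the identifier-list component.
    return eq(lambda il, e: il)

def get_eq_body(eq):
    # GET_EQ_BODY: select the expression component.
    return eq(lambda il, e: e)
```

For example, `get_eq_idlist(mk_eq(["x"], "y + z"))` recovers `["x"]`.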
8.3.2 Equation List
8.3.2.1 Type Definition of Eq_list
def Eq_list = @(List, Equation);
8.3.2.2 Constructors of Eq_list
def NIL_EQ_LIST = @(NIL, Equation); def CONS_EQ_LIST = @(CONS, Equation); dec ERR_EQL : Eq_list;
8.3.2.3 Predicate Functions of Eq_list
def IS_EMPTY_EQ_LIST = @(IS_EMPTY, Equation);
8.3.2.4 Projectors of Eq_list
def HEAD_EQ_LIST = @(HEAD, Equation); def TAIL_EQ_LIST = @(TAIL, Equation);
8.4 Syntax of Nodes
A node is a network of operators defined in SCADE/LUSTRE. It consists of a formal interface (declarations of formal parameters), local variables, equation blocks, and parameter blocks. A node is called by substituting actual parameters, called effective parameters, for its formal parameters. The syntax of nodes in PowerEpsilon is given as follows:
8.4.1 Types
8.4.1.1 Type Definition of Types
def Types = !(T : Prop) [T -> T -> T];
8.4.1.2 Constructors of Types
def BOOL_TYPE = \(T : Prop) \(u : T, v : T) u; def INT_TYPE = \(T : Prop) \(u : T, v : T) v;
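Types is the Church encoding of a two-element enumeration, so each constructor is simply a selector. A Python sketch (illustrative, not part of the development) makes the elimination pattern explicit:

```python
# Church encoding of the two-element enumeration Types: each constructor
# selects one of the two alternatives supplied at elimination time.
BOOL_TYPE = lambda u, v: u
INT_TYPE = lambda u, v: v

def type_name(t):
    # Eliminate a Types value by giving one result per constructor.
    return t("bool", "int")
```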
8.4.2 Input Parameters
8.4.2.1 Type Definition of Input_par
def Input_par = !(T : Prop) [[Identifier -> Types -> T] -> [Identifier -> Types -> Identifier -> T] -> T];
8.4.2.2 Constructors of Input_par
def MK_INPUT = \(i : Identifier, t : Types) \(T : Prop) \(p : [Identifier -> Types -> T],
q : [Identifier -> Types -> Identifier -> T]) @(p, i, t); def MK_WINPUT = \(i : Identifier, t : Types, w : Identifier) \(T : Prop) \(p : [Identifier -> Types -> T], q : [Identifier -> Types -> Identifier -> T]) @(q, i, t, w);
8.4.2.3 Predicate Function of Input_par
dec IS_INPUT_PAR : [Input_par -> Bool];
8.4.2.4 Projectors of Input_par
dec GET_INPUT_IDEN : [Input_par -> Identifier]; dec GET_INPUT_TYPE : [Input_par -> Types]; dec GET_WINPUT_IDEN : [Input_par -> Identifier]; dec GET_WINPUT_TYPE : [Input_par -> Types]; dec GET_WINPUT_WHEN : [Input_par -> Identifier];
8.4.3 Input Parameter List
8.4.3.1 Type Definition of Input_par_list
def Input_par_list = @(List, Input_par);
8.4.3.2 Constructors of Input_par_list
def NIL_INPAR_LIST = @(NIL, Input_par); def CONS_INPAR_LIST = @(CONS, Input_par);
8.4.3.3 Predicate Functions of Input_par_list
def IS_EMP_INPAR_LIST = @(IS_EMPTY, Input_par);
8.4.3.4 Projectors of Input_par_list
def HEAD_INPAR_LIST = @(HEAD, Input_par); def TAIL_INPAR_LIST = @(TAIL, Input_par);
8.4.4 Output Parameters
8.4.4.1 Type Definition of Output
def Output = !(T : Prop) [[Identifier -> Types -> T] -> [Identifier -> Types -> Identifier -> T] -> T];
8.4.4.2 Constructors of Output
def MK_OUTPUT = \(i : Identifier, t : Types) \(T : Prop) \(p : [Identifier -> Types -> T], q : [Identifier -> Types -> Identifier -> T]) @(p, i, t); def MK_WOUTPUT = \(i : Identifier, t : Types, w : Identifier) \(T : Prop) \(p : [Identifier -> Types -> T], q : [Identifier -> Types -> Identifier -> T]) @(q, i, t, w);
8.4.4.3 Predicate Function of Output
dec IS_OUTPUT : [Output -> Bool];
8.4.4.4 Projectors of Output
dec GET_OUTPUT_IDEN : [Output -> Identifier]; dec GET_OUTPUT_TYPE : [Output -> Types]; dec GET_WOUTPUT_IDEN : [Output -> Identifier]; dec GET_WOUTPUT_TYPE : [Output -> Types]; dec GET_WOUTPUT_WHEN : [Output -> Identifier];
8.4.5 Output Parameter List
8.4.5.1 Type Definition of Output_list
def Output_list = @(List, Output);
8.4.5.2 Constructors of Output_list
def NIL_OUTPAR_LIST = @(NIL, Output); def CONS_OUTPAR_LIST = @(CONS, Output);
8.4.5.3 Predicate Function of Output_list
def IS_EMP_OLIST = @(IS_EMPTY, Output);
8.4.5.4 Projectors of Output_list
def HEAD_OLIST = @(HEAD, Output); def TAIL_OLIST = @(TAIL, Output);
8.4.6 Header of Nodes
8.4.6.1 Type Definition of Header_Node
def Header_Node = !(T : Prop) [[Identifier -> Input_par_list -> Output_list -> T] -> T];
8.4.6.2 Constructors of Header_Node
def MK_HEADER_NODE = \(i : Identifier, il : Input_par_list, ol : Output_list) \(T : Prop) \(p : [Identifier -> Input_par_list -> Output_list -> T]) @(p, i, il, ol);
8.4.6.3 Projectors of Header_Node
dec GET_HEAD_NAME : [Header_Node -> Identifier]; dec GET_HEAD_INLS : [Header_Node -> Input_par_list]; dec GET_HEAD_OULS : [Header_Node -> Output_list];
8.4.7 Local Declaration
8.4.7.1 Type Definition of Local
def Local = !(T : Prop) [[Identifier -> Types -> T] -> [Identifier -> Types -> Identifier -> T] -> T];
8.4.7.2 Constructors of Local
def MK_LOCAL = \(i : Identifier, t : Types) \(T : Prop) \(p : [Identifier -> Types -> T], q : [Identifier -> Types -> Identifier -> T]) @(p, i, t); def MK_WLOCAL = \(i : Identifier, t : Types, w : Identifier) \(T : Prop) \(p : [Identifier -> Types -> T],
q : [Identifier -> Types -> Identifier -> T]) @(q, i, t, w);
8.4.7.3 Predicate Function of Local
dec IS_LOCAL : [Local -> Bool];
8.4.7.4 Projectors of Local
dec GET_LOCAL_IDEN : [Local -> Identifier]; dec GET_LOCAL_TYPE : [Local -> Types]; dec GET_WLOCAL_IDEN : [Local -> Identifier]; dec GET_WLOCAL_TYPE : [Local -> Types]; dec GET_WLOCAL_WHEN : [Local -> Identifier];
8.4.8 Local Declaration List
def Local_list = @(List, Local);
8.4.8.1 Constructors of Local_list
def NIL_LOCAL_LIST = @(NIL, Local); def CONS_LOCAL_LIST = @(CONS, Local);
8.4.8.2 Predicate Function of Local_list
def IS_EMP_LLIST = @(IS_EMPTY, Local);
8.4.8.3 Projectors of Local_list
def HEAD_LLIST = @(HEAD, Local); def TAIL_LLIST = @(TAIL, Local);
8.4.9 Node Definition
8.4.9.1 Type Definition of Node_Def
A node definition consists of a header declaration, a local declaration, and a list of equations.
def Node_Def = !(T : Prop) [[Header_Node -> Local_list -> Eq_list -> T] -> T];
8.4.9.2 Constructors of Node_Def
There is one normal constructor, MK_NODE_DEF, and one abnormal constructor, ERR_NODE_DEF.
dec MK_NODE_DEF : [Header_Node -> Local_list -> Eq_list -> Node_Def]; dec ERR_NODE_DEF : Node_Def;
8.4.9.3 Projectors of Node_Def
Since Node_Def has three components, three projectors are declared.
dec GET_NODE_HEAD : [Node_Def -> Header_Node]; dec GET_NODE_LLST : [Node_Def -> Local_list]; dec GET_NODE_EQLS : [Node_Def -> Eq_list];
8.4.10 Node Definition List
A LUSTRE program consists of a list of node definitions.
def Node_Def_List = @(List, Node_Def);
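To make the node concept concrete, a node can be modeled as a function from input flows to output flows. The Python sketch below is illustrative only (`sum_node` is a hypothetical example, not part of the development); it realizes a node computing out = in + (0 -> pre out), the running sum of its input:

```python
def sum_node(inp):
    # A node as a function from one input flow to one output flow:
    # out = in + (0 -> pre out), i.e. the running sum of the input.
    out, acc = [], 0
    for v in inp:
        acc = acc + v      # current input plus previous output (initially 0)
        out.append(acc)
    return out
```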
Chapter 9
Semantics of SCADE/LUSTRE
The goal of the designers of LUSTRE was to propose a programming language based on the very simple data-flow model used by most control engineers: their usual formalisms are either systems of equations (differential, finite-difference, Boolean equations) or data-flow networks (analog diagrams, block diagrams, gates and flip-flops). In such formalisms, each variable that is not an input is defined exactly once in terms of other variables. One writes "x = y + z", meaning that at each instant k (or, at each step k) x_k = y_k + z_k. In other words, each variable is a function of time, which is supposed to be discrete in a digital implementation: in basic LUSTRE, any variable or expression denotes a flow, i.e., an infinite sequence of values of its type. A basic LUSTRE program is effectively an infinite loop, and each variable or expression takes the k-th value of its sequence at the k-th step of the loop. All the usual operators - Boolean, arithmetic, comparison, conditional - are implicitly extended to operate pointwise on flows. For example, one writes x = if y >= 0 then y else -y to express that x is equal to the absolute value of y at each step. In the LUSTRE-based commercial SCADE tool, this equation has a graphical counterpart as a block diagram. Notice that constants (like 0, true) represent constant flows. Any combinational function on flows can be described in this way.
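The pointwise extension of operators to flows can be sketched in Python (an illustrative model in which a flow is a finite prefix of its infinite sequence; the helper names are ours, not LUSTRE's):

```python
def lift2(op, xs, ys):
    # Pointwise extension of a binary operator to two flows.
    return [op(x, y) for x, y in zip(xs, ys)]

def lift_if(cs, ts, es):
    # Pointwise conditional: at each step choose between the two branches.
    return [t if c else e for c, t, e in zip(cs, ts, es)]

# x = if y >= 0 then y else -y  (the absolute value of y at every step)
y = [3, -1, 0, -7]
x = lift_if([v >= 0 for v in y], y, [-v for v in y])
```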
9.1 Semantic Objects and Functions
9.1.1 Values
We start by defining a set of basic semantic values for SCADE/LUSTRE. There are two different kinds of semantic values: the denotable (or static) semantic value SDDval and the dynamic semantic value SDVal.
9.1.1.1 Type Definition of SDVal
SDVal is used as the type of values carried by streams. The definition of the type SDVal is given as follows: dec SDVal : Prop; def SDVal = !(T : Prop) [[Bool -> T] -> [Int -> T] -> T];
9.1.1.2 Constructors of SDVal
There are three constructors of SDVal. Two of them are normal constructors, MK_SDVAL_BOOL and MK_SDVAL_INT, and one is an abnormal constructor, ERR_SDVAL.
dec MK_SDVAL_BOOL : [Bool -> SDVal]; def MK_SDVAL_BOOL = \(v : Bool) \(T : Prop) \(b : [Bool -> T], i : [Int -> T]) @(b, v); dec MK_SDVAL_INT : [Int -> SDVal]; def MK_SDVAL_INT = \(v : Int) \(T : Prop) \(b : [Bool -> T], i : [Int -> T]) @(i, v); dec ERR_SDVAL : SDVal;
9.1.1.3 Projectors of SDVal
def GET_SDVAL_BOOL = \(v : SDVal) @(v, Bool, \(b : Bool) b, \(i : Int) ERR_BOOL); def GET_SDVAL_INT = \(v : SDVal) @(v, Int, \(b : Bool) ERR_INT, \(i : Int) i);
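The Church-style encoding of SDVal can be mimicked in Python (a sketch; `ERR_BOOL` and `ERR_INT` here are stand-in tokens for the declared error values): a value is a function that applies the continuation matching its constructor.

```python
ERR_BOOL, ERR_INT = object(), object()  # stand-ins for the error values

def mk_sdval_bool(v):
    # MK_SDVAL_BOOL: feed v to the Bool continuation.
    return lambda on_bool, on_int: on_bool(v)

def mk_sdval_int(v):
    # MK_SDVAL_INT: feed v to the Int continuation.
    return lambda on_bool, on_int: on_int(v)

def get_sdval_bool(s):
    # GET_SDVAL_BOOL: the wrong constructor yields the error token.
    return s(lambda b: b, lambda i: ERR_BOOL)

def get_sdval_int(s):
    # GET_SDVAL_INT: symmetric to the Bool projector.
    return s(lambda b: ERR_INT, lambda i: i)
```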
9.1.2 Value List
9.1.2.1 Type Definition of SDVal_list
def SDVal_list = @(List, @(TStream, SDVal));
9.1.2.2 Constructors of SDVal_list
def NIL_SDVAL_LIST = @(NIL, @(TStream, SDVal)); def CONS_SDVAL_LIST = @(CONS, @(TStream, SDVal)); dec ERR_SDVAL_LIST : SDVal_list; dec ERR_TSDVAL : @(TStream, SDVal); def CONCAT_SDL = @(CONCAT, @(TStream, SDVal));
9.1.2.3 Projectors of SDVal_list
def HEAD_SDL = @(HEAD, @(TStream, SDVal)); def TAIL_SDL = @(TAIL, @(TStream, SDVal));
9.1.2.4 Predicate Function of SDVal_list
def IS_EMPTY_SDL = @(IS_EMPTY, @(TStream, SDVal));
9.2 Environments
9.2.1 Denotable Value
9.2.1.1 Type Definition of SDDval
def Proc_dval = [Env -> Expr_list -> SDVal_list]; def SDDval = @(Sum2, SDVal_list, Proc_dval);
9.2.1.2 Constructors of SDDval
def MK_SDDVAL_LIST = \(sdl : SDVal_list) @(INJ1, SDVal_list, Proc_dval, sdl); def MK_SDDVAL_PROC = \(pro : Proc_dval) @(INJ2, SDVal_list, Proc_dval, pro); dec ERR_SDDVAL : SDDval; dec ERR_PROC_DVAL : Proc_dval;
9.2.1.3 Predicate Function of SDDval
dec SDDVAL_EQ : [SDDval -> SDDval -> Bool];
9.2.2 Environment
9.2.2.1 Type Definition of Env
def Env = [Identifier -> SDDval];
dec ENV_EQ : [Env -> Env -> Bool]; dec IDEN_EQ : [Identifier -> Identifier -> Bool];
9.2.2.2 Constructors of Env
dec EMPTY_ENV : Env; def ENTER_ENV = \(r : Env, i : Identifier, t : SDDval) \(j : Identifier) @(GIF_THEN_ELSE, SDDval, @(ENV_EQ, r, EMPTY_ENV), ERR_SDDVAL, @(GIF_THEN_ELSE, SDDval, @(IDEN_EQ, i, j), t, @(r, j))); def ENTER_LOC_ENV = \(r : Env, i : Identifier, t : SDVal_list) \(j : Identifier) @(GIF_THEN_ELSE, SDDval, @(ENV_EQ, r, EMPTY_ENV), @(INJ1, SDVal_list, Proc_dval, ERR_SDVAL_LIST), @(GIF_THEN_ELSE, SDDval, @(IDEN_EQ, i, j), @(INJ1, SDVal_list, Proc_dval, t), @(r, j))); def ENTER_PRO_ENV = \(r : Env, i : Identifier, t : Proc_dval) \(j : Identifier) @(GIF_THEN_ELSE, SDDval, @(ENV_EQ, r, EMPTY_ENV), @(INJ2, SDVal_list, Proc_dval, ERR_PROC_DVAL), @(GIF_THEN_ELSE, SDDval, @(IDEN_EQ, i, j), @(INJ2, SDVal_list, Proc_dval, t),
@(r, j))); def COMP_ENV = \(r1 : Env, r2 : Env) \(i : Identifier) let l = @(r2, i) in @(GIF_THEN_ELSE, SDDval, @(SDDVAL_EQ, l, ERR_SDDVAL), @(r1, i), l); dec ERR_ENV : Env;
9.2.2.3 Predicate Functions of Env
def IS_IN_ENV = \(r : Env, i : Identifier) let t = @(r, i) in @(GIF_THEN_ELSE, Bool, @(SDDVAL_EQ, t, ERR_SDDVAL), FF, TT);
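Environments as functions from identifiers to denotable values can be sketched in Python (illustrative; `ERR` stands in for ERR_SDDVAL, and the ENV_EQ guard on the empty environment is dropped for brevity). ENTER_ENV corresponds to functional extension, and COMP_ENV is intended as composition with the second environment taking precedence:

```python
ERR = object()  # stand-in for ERR_SDDVAL

EMPTY_ENV = lambda ident: ERR

def enter_env(r, i, t):
    # Extend r so that i maps to t, shadowing any earlier binding of i.
    return lambda j: t if j == i else r(j)

def comp_env(r1, r2):
    # Look up in r2 first; fall back to r1 when the lookup fails.
    return lambda i: r1(i) if r2(i) is ERR else r2(i)

def is_in_env(r, i):
    # A lookup succeeds iff it does not yield the error token.
    return r(i) is not ERR
```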
9.3 Semantics of Expressions
9.3.1 Semantics of Unary Operator
9.3.1.1 Semantics of Logical Negation
The semantic function DO_NOT requires that the argument of logical negation be of type Bool. dec DO_NOT : [@(TStream, SDVal) -> @(TStream, SDVal)]; def DO_NOT = \(s : @(TStream, SDVal)) let t = @(FST, s), r = @(SND, s) in let ht = @(PJ1, SDVal, Bool, t), tt = @(PJ2, SDVal, Bool, t) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_SDVAL_BOOL, ht),
let x = @(MK_SDVAL_BOOL, @(NOT, @(GET_SDVAL_BOOL, ht))) in , @(TSNIL, SDVal));
9.3.1.2 Semantics of Arithmetic Negation
The semantic function DO_NEG requires that the argument of arithmetic negation be of type Int.
dec DO_NEG : [@(TStream, SDVal) -> @(TStream, SDVal)]; def DO_NEG = \(s : @(TStream, SDVal)) let t = @(FST, s), r = @(SND, s) in let ht = @(PJ1, SDVal, Bool, t), tt = @(PJ2, SDVal, Bool, t) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_SDVAL_INT, ht), let x = @(MK_SDVAL_INT, @(INEG, @(GET_SDVAL_INT, ht))) in , @(TSNIL, SDVal));
9.3.1.3 Semantics of Unary Operators
We then have the semantic function DO_UNOP, defined on a given value stream. dec DO_UNOP : [Unop -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_UNOP = \(o : Unop, s : @(TStream, SDVal)) @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_UNNOT, o), @(DO_NOT, s), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_UNNEG, o), @(DO_NEG, s), @(TSNIL, SDVal)));
And the semantic function DO_UNOP_LIST is defined on a given list of value streams. dec DO_UNOP_LIST : [Unop -> SDVal_list -> SDVal_list];
def DO_UNOP_LIST = \(o : Unop, svl : SDVal_list) @(GIF_THEN_ELSE, SDVal_list, @(NOT, @(IS_EMPTY_SDL, svl)), let sv = @(HEAD_SDL, svl), sl = @(TAIL_SDL, svl) in let nv = @(DO_UNOP, o, sv), nl = @(DO_UNOP_LIST, o, sl) in @(CONS_SDVAL_LIST, nv, nl), NIL_SDVAL_LIST);
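The shape of DO_UNOP and DO_UNOP_LIST (apply an operator pointwise along one stream, then map that over a list of streams) can be sketched in Python as an illustrative model:

```python
def do_unop(op, stream):
    # Apply a unary operator pointwise along one value stream.
    return [op(v) for v in stream]

def do_unop_list(op, streams):
    # Apply the operator to every stream in a list of streams.
    return [do_unop(op, s) for s in streams]
```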
9.3.2 Semantics of Binary Operator
9.3.2.1 Semantics of +
The semantic function DO_PLUS requires that both arguments of the plus operator be of type Int.
dec DO_PLUS : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_PLUS = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_INT, @(IADD, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2))) in , @(TSNIL, SDVal));
9.3.2.2 Semantics of -
The semantic function DO_MINUS requires that both arguments of the minus operator be of type Int.
dec DO_MINUS : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)];
def DO_MINUS = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_INT, @(IMINUS, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2))) in , @(TSNIL, SDVal));
9.3.2.3 Semantics of *
The semantic function DO_TIMES requires that both arguments of the multiplication operator be of type Int.
dec DO_TIMES : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_TIMES = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_INT, @(ITIMES, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2))) in , @(TSNIL, SDVal));
9.3.2.4 Semantics of /
The semantic function DO_DIV requires that both arguments of the division operator be of type Int.
dec DO_DIV :
[@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_DIV = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_INT, @(IDIV, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2))) in , @(TSNIL, SDVal));
9.3.2.5 Semantics of mod
The semantic function DO_MOD requires that both arguments of the modulo operator be of type Int.
dec DO_MOD : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_MOD = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_INT, @(IMOD, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2))) in , @(TSNIL, SDVal));
9.3.2.6 Semantics of =
The semantic function DO_EQ requires that the two arguments of the equality operator have the same type.
dec DO_EQ : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_EQ = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_BOOL, @(IEQ, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2))) in , @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_BOOL, ht1), @(IS_SDVAL_BOOL, ht2)), let x = @(MK_SDVAL_BOOL, @(BEQ, @(GET_SDVAL_BOOL, ht1), @(GET_SDVAL_BOOL, ht2))) in , @(TSNIL, SDVal)));
9.3.2.7 Semantics of <>
The semantic function DO_UNEQ requires that the two arguments of the inequality operator have the same type.
dec DO_UNEQ : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_UNEQ = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_BOOL, @(NOT,
@(IEQ, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2)))) in , @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_BOOL, ht1), @(IS_SDVAL_BOOL, ht2)), let x = @(MK_SDVAL_BOOL, @(NOT, @(BEQ, @(GET_SDVAL_BOOL, ht1), @(GET_SDVAL_BOOL, ht2)))) in , @(TSNIL, SDVal)));
9.3.2.8 Semantics of <
The semantic function DO_LT requires that the two arguments of the less-than operator have the same type.
dec DO_LT : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_LT = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_BOOL, @(ILS, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2))) in , @(TSNIL, SDVal));
9.3.2.9 Semantics of <=
The semantic function DO_LE requires that the two arguments of the less-than-or-equal operator have the same type.
dec DO_LE : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_LE =
\(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_BOOL, @(ILE, @(GET_SDVAL_INT, ht1), @(GET_SDVAL_INT, ht2))) in , @(TSNIL, SDVal));
9.3.2.10 Semantics of >

The semantic function DO_GT requires that the types of the two arguments of the greater-than operator must be the same.

dec DO_GT : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_GT = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_BOOL, @(ILS, @(GET_SDVAL_INT, ht2), @(GET_SDVAL_INT, ht1))) in , @(TSNIL, SDVal));
9.3.2.11 Semantics of >=

The semantic function DO_GE requires that the types of the two arguments of the greater-than-or-equal operator must be the same.

dec DO_GE : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)];
def DO_GE = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_INT, ht1), @(IS_SDVAL_INT, ht2)), let x = @(MK_SDVAL_BOOL, @(ILE, @(GET_SDVAL_INT, ht2), @(GET_SDVAL_INT, ht1))) in , @(TSNIL, SDVal));
9.3.2.12 Semantics of and

The semantic function DO_AND requires that both arguments of the and operator must be Bool.

dec DO_AND : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_AND = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_BOOL, ht1), @(IS_SDVAL_BOOL, ht2)), let x = @(MK_SDVAL_BOOL, @(AND, @(GET_SDVAL_BOOL, ht1), @(GET_SDVAL_BOOL, ht2))) in , @(TSNIL, SDVal));
9.3.2.13 Semantics of or

The semantic function DO_OR requires that both arguments of the or operator must be Bool.
dec DO_OR : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_OR = \(s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) let t1 = @(FST, s1), r1 = @(SND, s1), t2 = @(FST, s2), r2 = @(SND, s2) in let ht1 = @(PJ1, SDVal, Bool, t1), tt1 = @(PJ2, SDVal, Bool, t1), ht2 = @(PJ1, SDVal, Bool, t2), tt2 = @(PJ2, SDVal, Bool, t2) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(AND, @(IS_SDVAL_BOOL, ht1), @(IS_SDVAL_BOOL, ht2)), let x = @(MK_SDVAL_BOOL, @(OR, @(GET_SDVAL_BOOL, ht1), @(GET_SDVAL_BOOL, ht2))) in , @(TSNIL, SDVal));
9.3.2.14 Semantics of Binary Operators
We then have the semantic function DO BINOP defined on two given value streams. dec DO_BINOP : [Binop -> @(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_BINOP = \(o : Binop, s1 : @(TStream, SDVal), s2 : @(TStream, SDVal)) @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_PLUS, o), @(DO_PLUS, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_MINUS, o), @(DO_MINUS, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_TIMES, o), @(DO_TIMES, s1, s2), @(GIF_THEN_ELSE0,
@(TStream, SDVal), @(IS_DIV, o), @(DO_DIV, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_MOD, o), @(DO_MOD, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_EQ, o), @(DO_EQ, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_UNEQ, o), @(DO_UNEQ, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_LT, o), @(DO_LT, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_LE, o), @(DO_LE, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_GT, o), @(DO_GT, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_GE, o), @(DO_GE, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_ANDSS, o), @(DO_AND, s1, s2), @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_ORS, o), @(DO_OR, s1, s2), @(TSNIL, SDVal))))))))))))));
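The DO_PLUS through DO_OR functions all follow one pattern: check the types of the two stream heads, apply the scalar operation pointwise if they match, and otherwise produce the empty stream (TSNIL). As a rough intuition only, this pattern can be sketched in Python with streams as lists; lift_binop and the use of isinstance as a stand-in for the IS_SDVAL_* predicates are hypothetical illustrations, not part of the formal development.

```python
import operator

def lift_binop(op, expected_type, s1, s2):
    """Hypothetical pointwise lifting of a scalar operator to value
    streams, mirroring what DO_PLUS .. DO_OR do: the operation is
    applied only when both heads have the expected type; otherwise
    the result is the empty stream (like @(TSNIL, SDVal))."""
    if not s1 or not s2:
        return []
    h1, h2 = s1[0], s2[0]
    if not (isinstance(h1, expected_type) and isinstance(h2, expected_type)):
        return []  # type mismatch: empty stream
    # apply the operation to the heads, recurse on the tails
    return [op(h1, h2)] + lift_binop(op, expected_type, s1[1:], s2[1:])

print(lift_binop(operator.add, int, [1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
```

DO_BINOP then only has to dispatch on the Binop tag, as the cascade of GIF_THEN_ELSE0 tests above does.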
Similarly, the semantic function DO_BINOP_LIST is defined on two given lists of value streams.

dec DO_BINOP_LIST : [Binop -> SDVal_list -> SDVal_list -> SDVal_list];
def DO_BINOP_LIST = \(o : Binop, svl1 : SDVal_list, svl2 : SDVal_list) @(GIF_THEN_ELSE, SDVal_list, @(AND, @(NOT, @(IS_EMPTY_SDL, svl1)), @(NOT, @(IS_EMPTY_SDL, svl2))), let sv1 = @(HEAD_SDL, svl1), sl1 = @(TAIL_SDL, svl1), sv2 = @(HEAD_SDL, svl2), sl2 = @(TAIL_SDL, svl2) in let nv = @(DO_BINOP, o, sv1, sv2), nl = @(DO_BINOP_LIST, o, sl1, sl2) in @(CONS_SDVAL_LIST, nv, nl), NIL_SDVAL_LIST);
9.3.3
Semantics of PRE Operator
pre (“previous”) acts as a memory: if (e1, e2, ..., en, ...) is the sequence of values of expression E, then pre(E) has the same clock as E, and its sequence of values is (nil, e1, e2, ..., en, ...), where nil represents an undefined value denoting an uninitialized memory. More formally:

    pre(x)_n = nil      for n = 0
    pre(x)_n = x_{n-1}  for n > 0

dec DO_PRE : [@(TStream, SDVal) -> @(TStream, SDVal)]; def DO_PRE = \(s : @(TStream, SDVal)) @(TPRES, SDVal, s); dec DO_PRE_LIST : [SDVal_list -> SDVal_list]; def DO_PRE_LIST = \(svl : SDVal_list) @(GIF_THEN_ELSE, SDVal_list, @(IS_EMPTY_SDL, svl), NIL_SDVAL_LIST, let sv = @(HEAD_SDL, svl), sl = @(TAIL_SDL, svl) in let nv = @(DO_PRE, sv), nl = @(DO_PRE_LIST, sl) in
@(CONS_SDVAL_LIST, nv, nl));
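On finite prefixes, the pre equation can be sketched in Python, modeling a flow as a list and nil as None; this helper is a hypothetical illustration of the stream equation, not the PowerEpsilon definition.

```python
def pre(xs):
    """Model LUSTRE's pre: shift the flow by one step.

    The first value becomes nil (here None, standing for an
    uninitialized memory); value n of the result is value n-1 of xs.
    """
    return [None] + xs[:-1]

# pre((e1, e2, e3)) = (nil, e1, e2)
print(pre([1, 2, 3]))  # [None, 1, 2]
```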
9.3.4 Semantics of INIT Operator
The INIT operator “->” defines initial values: if x and y are flows of the same type, x -> y is the flow that is equal to x at the first step and equal to y thereafter. If E and F are expressions with the same clock, with respective sequences (e1, e2, ..., en, ...) and (f1, f2, ..., fn, ...), then E -> F is an expression with the same clock as E and F, whose sequence is (e1, f2, ..., fn, ...). In other words, E -> F is always equal to F, except at the first instant of its clock, where it equals E. More formally:

    (x -> y)_n = x_0  for n = 0
    (x -> y)_n = y_n  for n > 0

dec DO_INIT : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_INIT = \(fi : @(TStream, SDVal), ff : @(TStream, SDVal)) @(INIT, SDVal, fi, ff); dec DO_INIT_LIST : [SDVal_list -> SDVal_list -> SDVal_list]; def DO_INIT_LIST = \(fil : SDVal_list, ffl : SDVal_list) @(GIF_THEN_ELSE, SDVal_list, @(IS_EMPTY_SDL, fil), NIL_SDVAL_LIST, let hfil = @(HEAD_SDL, fil), tfil = @(TAIL_SDL, fil), hffl = @(HEAD_SDL, ffl), tffl = @(TAIL_SDL, ffl) in let nv = @(DO_INIT, hfil, hffl), nl = @(DO_INIT_LIST, tfil, tffl) in @(CONS_SDVAL_LIST, nv, nl));
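On finite prefixes, the -> equation amounts to taking the head of the first flow and the tail of the second. A minimal Python sketch (a hypothetical illustration; both flows are assumed to share one clock, so the lists have equal length):

```python
def arrow(xs, ys):
    """Model LUSTRE's -> (INIT): the first value comes from xs,
    every later value from ys.
    (x -> y)_0 = x_0, and (x -> y)_n = y_n for n > 0."""
    return xs[:1] + ys[1:]

print(arrow([0, 0, 0], [7, 8, 9]))  # [0, 8, 9]
```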
The pre and -> operators provide the essential descriptive power of the language. For instance,

edge = false -> (c and not pre(c));
nat  = 0 -> pre(nat) + 1;
edgecount = 0 -> if edge then pre(edgecount) + 1 else pre(edgecount);

defines edge to be true whenever the Boolean flow c has a rising edge, nat to be the step counter (nat_n = n), and edgecount to count the number of rising edges in c. LUSTRE definitions can be recursive (e.g., nat depends on pre(nat)), but the language requires that a variable only depend on past values of itself. These two temporal operators, pre and ->, make it possible to describe sequential functions.
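The three equations can be executed step by step; the Python simulation below is a hypothetical model of their stream semantics, not part of the PowerEpsilon development. The point to notice is that the -> operator selects the initial constant on cycle 0, so the nil produced by pre is never actually read.

```python
def simulate(c):
    """Step-by-step execution of the edge/nat/edgecount equations.
    Each variable's 'pre' value is its value on the previous cycle."""
    edge, nat, edgecount = [], [], []
    for n, cn in enumerate(c):
        if n == 0:
            # the -> operator picks the initial values here
            edge.append(False)
            nat.append(0)
            edgecount.append(0)
        else:
            edge.append(cn and not c[n - 1])       # rising edge of c
            nat.append(nat[n - 1] + 1)             # step counter
            edgecount.append(edgecount[n - 1] + 1 if edge[n]
                             else edgecount[n - 1])
    return edge, nat, edgecount

c = [False, True, True, False, True]
edge, nat, cnt = simulate(c)
print(edge)  # [False, True, False, False, True]
print(nat)   # [0, 1, 2, 3, 4]
print(cnt)   # [0, 1, 1, 1, 2]
```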
9.3.5 Semantics of WHEN Operator
when “samples” an expression according to a slower clock: if E is an expression and B is a Boolean expression with the same clock, then E when B is an expression whose clock is defined by B, and whose sequence is extracted from the one of E by keeping only those values of indexes corresponding to true values in the sequence of B. In other words, it is the sequence of values of E when B is true. dec DO_WHEN : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)]; def DO_WHEN = \(s : @(TStream, SDVal), b : @(TStream, SDVal)) let hs = @(FST, s), ts = @(SND, s), hb = @(FST, b), tb = @(SND, b) in let hhs = @(PJ1, SDVal, Bool, hs), ths = @(PJ2, SDVal, Bool, hs) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(GET_SDVAL_BOOL, @(PJ1, SDVal, Bool, hb)), , @(DO_WHEN, ts, tb)); dec DO_WHEN_LIST : [SDVal_list -> SDVal_list -> SDVal_list]; def DO_WHEN_LIST = \(fil : SDVal_list, ffl : SDVal_list) @(GIF_THEN_ELSE, SDVal_list, @(IS_EMPTY_SDL, fil), NIL_SDVAL_LIST, let hfil = @(HEAD_SDL, fil), tfil = @(TAIL_SDL, fil),
hffl = @(HEAD_SDL, ffl) in let nv = @(DO_WHEN, hfil, hffl), nl = @(DO_WHEN_LIST, tfil, ffl) in @(CONS_SDVAL_LIST, nv, nl));
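On finite prefixes, sampling by when is simply filtering by the Boolean flow. A minimal Python sketch of the equation (a hypothetical illustration, assuming both flows share one clock):

```python
def when(xs, bs):
    """Model LUSTRE's when: sample xs on the slower clock defined by
    the Boolean flow bs, keeping x_n exactly when b_n is true."""
    return [x for x, b in zip(xs, bs) if b]

print(when(['x1', 'x2', 'x3', 'x4'], [False, True, False, True]))  # ['x2', 'x4']
```

Note that the result is on the clock bs: it has fewer values than xs, one per instant at which bs is true.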
9.3.6 Semantics of CURRENT Operator
current “interpolates” an expression on the clock immediately faster than its own. Let E be an expression whose clock is not the basic one, and let B be the Boolean expression defining this clock. Then current E has the same clock C as B, and its value at any instant of this clock C is the value of E at the last instant when B was true. The behavior of sampling and interpolating is shown in Table 9.1.

B               false  true  false  true  false  false  true  true
X               x1     x2    x3     x4    x5     x6     x7    x8
Y := X when B          x2           x4                  x7    x8
Z := current Y  nil    x2    x2     x4    x4     x4     x7    x8

Table 9.1: Sampling and Interpolating

dec DO_CUR : [@(TStream, SDVal) -> @(TStream, SDVal)]; def DO_CUR = \(s : @(TStream, SDVal)) @(TCURRENT, SDVal, s); dec DO_CUR_LIST : [SDVal_list -> SDVal_list]; def DO_CUR_LIST = \(svl : SDVal_list) @(GIF_THEN_ELSE, SDVal_list, @(IS_EMPTY_SDL, svl), NIL_SDVAL_LIST, let sv = @(HEAD_SDL, svl), sl = @(TAIL_SDL, svl) in let nv = @(DO_CUR, sv), nl = @(DO_CUR_LIST, sl) in @(CONS_SDVAL_LIST, nv, nl));
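The columns of Table 9.1 can be reproduced with small list-based models of when and current; the helpers below are hypothetical illustrations, not the PowerEpsilon definitions (nil is rendered as None).

```python
def when(xs, bs):
    # sample: keep x_n exactly when b_n is true
    return [x for x, b in zip(xs, bs) if b]

def current(ys, bs):
    """Model LUSTRE's current: interpolate the sampled flow ys back
    onto the faster clock bs. At each step the value is the last
    sample taken while bs was true, or nil (None) before the first."""
    out, last, it = [], None, iter(ys)
    for b in bs:
        if b:
            last = next(it)  # a fresh sample arrives on this instant
        out.append(last)     # otherwise hold the previous sample
    return out

B = [False, True, False, True, False, False, True, True]
X = ['x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8']
Y = when(X, B)     # ['x2', 'x4', 'x7', 'x8']
Z = current(Y, B)  # [None, 'x2', 'x2', 'x4', 'x4', 'x4', 'x7', 'x8']
```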
9.3.7 Semantics of IF Operator
The IF operator is the typical if-then-else construct found in many programming languages.

dec DO_IF : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal)];
def DO_IF = \(b : @(TStream, SDVal), s : @(TStream, SDVal), t : @(TStream, SDVal)) let hb = @(FST, b), tb = @(SND, b), hs = @(FST, s), ts = @(SND, s), ht = @(FST, t), tt = @(SND, t) in @(GIF_THEN_ELSE0, @(TStream, SDVal), @(GET_SDVAL_BOOL, @(PJ1, SDVal, Bool, hb)), , ); dec DO_IF_LIST : [SDVal_list -> SDVal_list -> SDVal_list -> SDVal_list]; def DO_IF_LIST = \(ifl : SDVal_list, thl : SDVal_list, ell : SDVal_list) @(GIF_THEN_ELSE, SDVal_list, @(IS_EMPTY_SDL, ifl), NIL_SDVAL_LIST, let hifl = @(HEAD_SDL, ifl), tifl = @(TAIL_SDL, ifl), hthl = @(HEAD_SDL, thl), tthl = @(TAIL_SDL, thl), hell = @(HEAD_SDL, ell), tell = @(TAIL_SDL, ell) in let nv = @(DO_IF, hifl, hthl, hell), nl = @(DO_IF_LIST, tifl, tthl, tell) in @(CONS_SDVAL_LIST, nv, nl));
9.3.8 Semantics of Boolean Expressions
DO_BOOL creates a stream of Boolean values.

dec DO_BOOL : [Bool -> @(TStream, SDVal)]; def DO_BOOL =
\(b : Bool) let nv = @(ANDS, SDVal, Bool, @(MK_SDVAL_BOOL, b), TT) in ;
DO_BOOL_LIST creates a one-element list containing a stream of Boolean values.

dec DO_BOOL_LIST : [Bool -> SDVal_list]; def DO_BOOL_LIST = \(b : Bool) let nv = @(DO_BOOL, b), nl = NIL_SDVAL_LIST in @(CONS_SDVAL_LIST, nv, nl);
9.3.9 Semantics of Integer Expressions
DO_INT creates a stream of integer values.

dec DO_INT : [Int -> @(TStream, SDVal)]; def DO_INT = \(b : Int) let nv = @(ANDS, SDVal, Bool, @(MK_SDVAL_INT, b), TT) in ;
DO_INT_LIST creates a one-element list containing a stream of integer values.

dec DO_INT_LIST : [Int -> SDVal_list]; def DO_INT_LIST = \(b : Int) let nv = @(DO_INT, b), nl = NIL_SDVAL_LIST in @(CONS_SDVAL_LIST, nv, nl);
9.3.10 Semantics of Project Operator
DO_PROJ selects a value stream by a given index i from a list of value streams.

dec DO_PROJ : [SDVal_list -> Nat -> @(TStream, SDVal)]; def DO_PROJ = \(sl : SDVal_list, i : Nat)
@(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_EMPTY_SDL, sl), ERR_TSDVAL, @(GIF_THEN_ELSE0, @(TStream, SDVal), @(IS_ZERO, i), @(HEAD_SDL, sl), @(DO_PROJ, @(TAIL_SDL, sl), @(PP, i))));
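The recursion in DO_PROJ walks down the list while decrementing the index (PP is the predecessor on Nat), returning an error stream (ERR_TSDVAL) when the list runs out. A hypothetical Python sketch of the same recursion:

```python
# Stand-in for the error stream ERR_TSDVAL in the source.
ERR = "error-stream"

def proj(streams, i):
    """Select the i-th value stream from a list, with an error value
    for an out-of-range index, mirroring DO_PROJ's recursion."""
    if not streams:
        return ERR        # index out of range
    if i == 0:
        return streams[0] # IS_ZERO case: take the head
    return proj(streams[1:], i - 1)  # recurse on the tail with PP(i)

print(proj(['s0', 's1', 's2'], 1))  # s1
```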
9.3.11 Semantics of Expressions
Putting all together, the semantics of Expression is recursively defined as follows: dec SDEXPR : [Expression -> Env -> SDVal_list]; dec SDEXPRLIST : [Expr_list -> Env -> SDVal_list]; def SDEXPR = \(e : Expression, r : Env) @(GIF_THEN_ELSE, SDVal_list, @(IS_BOOL_EXPR, e), %-- Boolean Expressions --% let b = @(GET_EXPR_BOOL, e) in @(DO_BOOL_LIST, b), @(GIF_THEN_ELSE, SDVal_list, %-- PRE Expression --% @(IS_PRE_EXPR, e), let pe = @(GET_EXPR_PRE, e) in let svl = @(SDEXPR, pe, r) in @(DO_PRE_LIST, svl), @(GIF_THEN_ELSE, SDVal_list, %-- INIT --% @(IS_INIT_EXPR, e), let fi = @(GET_INIT_INIT, e), ff = @(GET_INIT_FOLLOW, e) in let fisvl = @(SDEXPR, fi, r), ffsvl = @(SDEXPR, ff, r) in @(DO_INIT_LIST, fisvl, ffsvl), @(GIF_THEN_ELSE, SDVal_list, %-- WHEN --%
@(IS_WHEN_EXPR, e), let fi = @(GET_WHEN_EXPR, e), ff = @(GET_WHEN_COND, e) in let fisvl = @(SDEXPR, fi, r), ffsvl = @(SDEXPR, ff, r) in @(DO_WHEN_LIST, fisvl, ffsvl), @(GIF_THEN_ELSE, SDVal_list, %-- CURRENT --% @(IS_CURRENT_EXPR, e), let pe = @(GET_EXPR_CUR, e) in let svl = @(SDEXPR, pe, r) in @(DO_CUR_LIST, svl), @(GIF_THEN_ELSE, SDVal_list, %-- Tuple --% @(IS_TUPLE_EXPR, e), let el = @(GET_EXPR_TUPLE, e) in let svl = @(SDEXPRLIST, el, r) in svl, @(GIF_THEN_ELSE, SDVal_list, %-- Function Call --% @(IS_CALL_EXPR, e), let id = @(GET_CALL_IDENT, e), el = @(GET_CALL_ELIST, e) in let t = @(r, id) in @(WHEN, SDVal_list, Proc_dval, SDVal_list, t, \(a1 : SDVal_list) NIL_SDVAL_LIST, \(a2 : Proc_dval) @(a2, r, el)), @(GIF_THEN_ELSE, SDVal_list, @(IS_IDENT_EXPR, e), %-- Identifier --% let id = @(GET_EXPR_IDENT, e) in let t = @(r, id) in @(WHEN, SDVal_list, Proc_dval, SDVal_list, t, \(a1 : SDVal_list) a1,
\(a2 : Proc_dval) NIL_SDVAL_LIST), @(GIF_THEN_ELSE, SDVal_list, %-- Null --% @(IS_NIL_EXPR, e), NIL_SDVAL_LIST, @(GIF_THEN_ELSE, SDVal_list, @(IS_INT_EXPR, e), %-- Integer Expressions --% let i = @(GET_EXPR_INT, e) in @(DO_INT_LIST, i), @(GIF_THEN_ELSE, SDVal_list, %-- IF --% @(IS_IF_EXPR, e), let if = @(GET_IF_COND, e), th = @(GET_IF_THEN, e), el = @(GET_IF_ELSE, e) in let ifsvl = @(SDEXPR, if, r), thsvl = @(SDEXPR, th, r), elsvl = @(SDEXPR, el, r) in @(DO_IF_LIST, ifsvl, thsvl, elsvl), @(GIF_THEN_ELSE, SDVal_list, %-- PROJ --% @(IS_POS_EXPR, e), let pos = @(GET_POS_EXPR, e), idx = @(GET_POS_INDEX, e) in let sl = @(SDEXPR, pos, r) in @(CONS_SDVAL_LIST, @(DO_PROJ, sl, idx), NIL_SDVAL_LIST), @(GIF_THEN_ELSE, SDVal_list, %-- BINOP --% @(IS_BINOP_EXPR, e), let binop = @(GET_EXPR_BIN_OP, e), lexp = @(GET_EXPR_BIN_LEXP, e), rexp = @(GET_EXPR_BIN_REXP, e) in let lsvl = @(SDEXPR, lexp, r), rsvl = @(SDEXPR, rexp, r) in @(DO_BINOP_LIST, binop, lsvl, rsvl), @(GIF_THEN_ELSE, SDVal_list, %-- UNOP --% @(IS_UNOP_EXPR, e), let unop = @(GET_EXPR_UNOP, e),
exp = @(GET_EXPR_UNEXP, e) in let svl = @(SDEXPR, exp, r) in @(DO_UNOP_LIST, unop, svl), NIL_SDVAL_LIST))))))))))))));
Again, the semantics of Expr_list, defined by SDEXPRLIST, is given recursively as follows:

def SDEXPRLIST = \(l : Expr_list, r : Env) @(GIF_THEN_ELSE, SDVal_list, @(IS_EMPTY, Expression, l), NIL_SDVAL_LIST, let hl = @(HEAD, Expression, l), tl = @(TAIL, Expression, l) in let sdv = @(SDEXPR, hl, r), sdl = @(SDEXPRLIST, tl, r) in @(CONCAT_SDL, sdv, sdl));
9.4 Semantics of Equations

9.4.1 The Utility Functions
To describe the semantics of equations, we need two auxiliary functions: DO_SDEQ and DO_SDEQ_LIST. The function DO_SDEQ creates a new environment from an environment r, an identifier id, and a new binding value stream sv.

def DO_SDEQ = \(id : Identifier, sv : @(TStream, SDVal), r : Env) @(ENTER_LOC_ENV, r, id, @(CONS_SDVAL_LIST, sv, NIL_SDVAL_LIST));
The function DO_SDEQ_LIST is recursively defined on a list of identifiers and a list of value streams.

dec DO_SDEQ_LIST : [Ident_list -> SDVal_list -> Env -> Env]; def DO_SDEQ_LIST = \(idl : Ident_list, svl : SDVal_list, r : Env) @(GIF_THEN_ELSE, Env, @(IS_EMPTY_ILST, idl),
r, let id = @(HEAD_ILST, idl), il = @(TAIL_ILST, idl), sv = @(HEAD_SDL, svl), sl = @(TAIL_SDL, svl) in let nr = @(DO_SDEQ, id, sv, r) in @(DO_SDEQ_LIST, il, sl, nr));
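The effect of these two functions can be pictured with a Python dictionary standing in for the environment; this is a hypothetical model of ENTER_LOC_ENV and the list recursion, not the PowerEpsilon code. Note that DO_SDEQ wraps each stream into a one-element SDVal_list before binding it.

```python
def do_sdeq(ident, stream, env):
    """Bind one identifier to a singleton list holding its stream,
    as DO_SDEQ does via CONS_SDVAL_LIST onto NIL_SDVAL_LIST."""
    new_env = dict(env)        # extend without mutating the old env
    new_env[ident] = [stream]
    return new_env

def do_sdeq_list(idents, streams, env):
    """Thread the extended environment through each (id, stream)
    pair, mirroring the recursion of DO_SDEQ_LIST."""
    for ident, stream in zip(idents, streams):
        env = do_sdeq(ident, stream, env)
    return env

env = do_sdeq_list(['x', 'y'], [[1, 2], [3, 4]], {})
print(env)  # {'x': [[1, 2]], 'y': [[3, 4]]}
```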
9.4.2 Semantic Function of Equation
The semantic function of equation SDEQ is given as follows: def SDEQ = \(e : Equation, r : Env) let idl = @(GET_EQ_IDLIST, e), eqb = @(GET_EQ_BODY, e) in let svl = @(SDEXPR, eqb, r) in @(DO_SDEQ_LIST, idl, svl, r);
9.4.3 Semantic Function of Eq_list
The semantic function of equation list SDEQ LIST is recursively defined as follows: dec SDEQ_LIST : [Eq_list -> Env -> Env]; def SDEQ_LIST = \(el : Eq_list, r : Env) @(GIF_THEN_ELSE, Env, @(IS_EMPTY_EQ_LIST, el), r, let ne = @(HEAD_EQ_LIST, el), nel = @(TAIL_EQ_LIST, el) in let nr = @(SDEQ, ne, r) in @(SDEQ_LIST, nel, nr));
9.5 Semantics of Nodes

9.5.1 Structured Programming in LUSTRE
LUSTRE provides the notion of a node to help structure programs. A node is a function of flows: it takes a number of typed input flows and defines a number of output flows by means of a system of equations that can also use local flows. Each output or local flow must be defined by exactly one equation.
The order of equations is irrelevant. For instance, a resettable event counter could be written

node COUNT(event, reset: bool) returns (count: int);
let
  count = if (true -> reset) then 0
          else if event then pre(count) + 1
          else pre(count);
tel

Using this node, our previous definition of edgecount could be replaced by

edgecount = COUNT(edge, false);
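The behavior of COUNT can be checked with a small step-by-step simulation; the Python below is a hypothetical model of the node's stream semantics, not part of the formal development. The condition (true -> reset) is true on the first cycle and equal to reset afterwards, so the counter starts at 0 and is re-zeroed whenever reset holds.

```python
def count(event, reset):
    """Simulate the COUNT node: reset to 0 on the first cycle and
    whenever reset holds; increment on event; otherwise hold."""
    out = []
    for n in range(len(event)):
        first_or_reset = True if n == 0 else reset[n]  # true -> reset
        if first_or_reset:
            out.append(0)
        elif event[n]:
            out.append(out[n - 1] + 1)  # pre(count) + 1
        else:
            out.append(out[n - 1])      # pre(count)
    return out

ev = [False, True, True, False, True]
rs = [False, False, False, True, False]
print(count(ev, rs))  # [0, 1, 2, 0, 1]
```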
9.5.2 Parameter Elaboration Functions
The parameter elaboration action for a node is given by the utility functions ENTRY_BIND and ENTRY_BIND_LIST.

dec RIF_THEN_ELSE : [Bool -> Env -> Env -> Env]; def RIF_THEN_ELSE = @(GIF_THEN_ELSE, Env); dec ENTRY_BIND_LIST : [Input_par_list -> Expr_list -> Env -> Env];
ENTRY_BIND is used to bind an input parameter p to an expression e.

dec ENTRY_BIND : [Input_par -> Expression -> Env -> Env]; def ENTRY_BIND = \(p : Input_par, e : Expression, r : Env) @(RIF_THEN_ELSE, @(IS_INPUT_PAR, p), let i = @(GET_INPUT_IDEN, p), t = @(GET_INPUT_TYPE, p) in @(ENTER_LOC_ENV, r, i, @(SDEXPR, e, r)), let i = @(GET_WINPUT_IDEN, p), t = @(GET_WINPUT_TYPE, p), w = @(GET_WINPUT_WHEN, p) in @(ENTER_LOC_ENV,
r, i, @(SDEXPR, @(MK_WHEN, e, @(MK_IDENT, w)), r)));
ENTRY_BIND_LIST is used to bind an input parameter list il to an expression list al.

def ENTRY_BIND_LIST = \(il : Input_par_list) \(al : Expr_list, r : Env) @(RIF_THEN_ELSE, @(IS_EMP_INPAR_LIST, il), %-- Empty Binding List --% @(RIF_THEN_ELSE, @(IS_EMP_ELIST, al), r, ERR_ENV), %-- Non-Empty Binding List --% @(RIF_THEN_ELSE, @(IS_EMP_ELIST, al), ERR_ENV, let i = @(HEAD_INPAR_LIST, il), nil = @(TAIL_INPAR_LIST, il), a = @(HEAD_ELIST, al), nal = @(TAIL_ELIST, al) in @(ENTRY_BIND, i, a, @(ENTRY_BIND_LIST, nil, nal, r))));
9.5.3 Semantics of Local Declarations
The SDLOCAL is used to define the semantics of local declaration in a given node. dec SDLOCAL : [Local -> Env -> Env]; def SDLOCAL = \(p : Local, r : Env) @(RIF_THEN_ELSE, @(IS_LOCAL, p), let i = @(GET_LOCAL_IDEN, p), t = @(GET_LOCAL_TYPE, p) in r, let i = @(GET_WLOCAL_IDEN, p), t = @(GET_WLOCAL_TYPE, p), w = @(GET_WLOCAL_WHEN, p) in
@(ENTER_LOC_ENV, r, i, @(SDEXPR, @(MK_WHEN, @(MK_IDENT, i), @(MK_IDENT, w)), r)));
SDLOCAL_LIST defines the semantics of a list of local declarations in a given node.

dec SDLOCAL_LIST : [Local_list -> Env -> Env]; def SDLOCAL_LIST = \(ll : Local_list, r : Env) @(RIF_THEN_ELSE, @(IS_EMP_LLIST, ll), r, let nl = @(HEAD_LLIST, ll), nll = @(TAIL_LLIST, ll) in let nr = @(SDLOCAL, nl, r) in @(SDLOCAL_LIST, nll, nr));
9.5.4 Semantics of Output Declarations
The SDOUTPUT is used to define the semantics of output declaration in a given node. dec SDOUTPUT : [Output -> Env -> SDVal_list]; def SDOUTPUT = \(p : Output, r : Env) @(GIF_THEN_ELSE, SDVal_list, @(IS_OUTPUT, p), let i = @(GET_OUTPUT_IDEN, p), t = @(GET_OUTPUT_TYPE, p) in @(SDEXPR, @(MK_IDENT, i), r), let i = @(GET_WOUTPUT_IDEN, p), t = @(GET_WOUTPUT_TYPE, p), w = @(GET_WOUTPUT_WHEN, p) in @(SDEXPR, @(MK_WHEN, @(MK_IDENT, i), @(MK_IDENT, w)), r));
SDOUTPUT_LIST defines the semantics of a list of output declarations in a given node.

dec SDOUTPUT_LIST : [Output_list -> Env -> SDVal_list]; def SDOUTPUT_LIST =
\(ol : Output_list, r : Env) @(GIF_THEN_ELSE, SDVal_list, @(IS_EMP_OLIST, ol), NIL_SDVAL_LIST, let no = @(HEAD_OLIST, ol), nol = @(TAIL_OLIST, ol) in let vl = @(SDOUTPUT, no, r), nvl = @(SDOUTPUT_LIST, nol, r) in @(CONCAT_SDL, vl, nvl));
9.5.5 Semantics of Nodes
SDNODE defines the semantics of a node declaration.

def SDNODE = \(n : Node_Def, r : Env) let head = @(GET_NODE_HEAD, n), llst = @(GET_NODE_LLST, n), eqls = @(GET_NODE_EQLS, n) in let name = @(GET_HEAD_NAME, head), inls = @(GET_HEAD_INLS, head), ouls = @(GET_HEAD_OULS, head) in let procval = \(rr : Env, el : Expr_list) let nr = @(ENTRY_BIND_LIST, inls, el, @(SDEQ_LIST, eqls, rr)) in @(SDOUTPUT_LIST, ouls, nr) in @(ENTER_PRO_ENV, r, name, procval);
Several other features depend on the tools and compiler used:

1. Structured types - Real applications make use of structured data. This can be delegated to the host language (imported types and functions, written in C). LUSTRE V4 and the commercial SCADE tool offer record and array types within the language.

2. Clocks and activation conditions - It is often useful to activate some parts of a program at different rates. Data-flow synchronous languages provide this facility using clocks. In LUSTRE, a program has a basic clock, which is the finest notion of time (i.e., the external activation cycle). Some flows can follow slower clocks, i.e., have values only at certain steps of the basic clock: if x is a flow and c is a Boolean flow, x when c is the flow whose sequence of values is that of x when c is true. This new flow is on the clock c, meaning that it has no value when c is false.
The notion of clock has two components:

• A static aspect, similar to a type mechanism: each flow has a clock, and there are constraints on the way flows can be combined. Most operators are required to operate on flows with the same clock; e.g., adding two flows with different clocks does not make sense and is flagged by the compiler.

• A dynamic aspect: according to the data-flow philosophy, operators are only activated when their operands are present, i.e., when their (common) clock is true. Clocks are the only way to control the activation of different parts of the program.

While the whole clock mechanism was offered in the original language, it appeared that actual users mainly need its dynamic aspect and consider the clock consistency constraints tedious. This is why the SCADE tool offers another mechanism called “activation conditions”: any operator or node can be associated with an activation condition that specifies when the operator is activated. The outputs of such a node, however, are always available; when the condition is false, their values are either frozen or take default values before the first activation.

By definition, a LUSTRE program may not contain syntactically cyclic definitions. The commercial SCADE tool provides an option for an even stronger constraint that forbids an output of a node to be fed back as an input without an intervening “pre” operator. Users generally accept these constraints. The first constraint ensures that there is a static dependence order between flows and allows a very simple generation of sequential code (the scheduling is just topological sorting). The second, stronger constraint enables separate compilation: when it is enforced, the execution order of equations in a node cannot be affected by how the node is called, and therefore the compiler can schedule each node individually before scheduling the system at the node level. Otherwise, the compiler would have to topologically sort every equation in the program.
Part IV
Implementation of SCADE/LUSTRE
Chapter 10
An Imperative Programming Language ToyC

10.1 From SCADE/LUSTRE to Implementation - Program Transformation Approach
Unlike the traditional implementation approach for most programming languages, where a compiler translates programs written in the language into machine code, there is no compiler for SCADE/LUSTRE. SCADE/LUSTRE programs are translated into C or Ada programs, and the machine-executable code is obtained by applying a C or Ada compiler. In this way, most traditional programming tools can still be used. For illustration purposes, a small programming language called ToyC is presented. Our intention here is to provide a vehicle for illustrating various formal concepts in use. ToyC is very similar to the language TINY introduced in [13]. ToyC is an imperative programming language. An imperative language is one which achieves its primary effect by changing the state of variables by assignment. All such languages have in common a fundamental dependence upon variables which contain values, and upon the assignment of values to variables, usually by an explicit assignment operator. A language model, following the PowerEpsilon style, comprises the following components:

1. An ‘abstract syntax’ for ‘syntactic domains’, i.e. domains representing source programs.
2. An abstract representation for ‘semantic domains’ modelling the ‘denotations’ of program constructs, comprising the program results.

3. PowerEpsilon predicates specifying the well-formedness constraints to be put on program objects.

4. Function definitions ascribing computational semantics to the source-language constructs. Ideally, there is one function to specify the semantics for the constructs of each major syntactic domain.
10.2 Syntax of ToyC
In specifying the syntax of a programming language for semantic analysis, it is convenient to avoid semantically irrelevant complications such as operator precedence and associativity by providing only an “abstract” form of syntax. In effect, an abstract syntax specifies the compositional structure of programs while leaving open some aspects of their concrete representation as strings of symbols. ToyC has three main kinds of constructs: declarations, expressions, and statements; all of them can contain identifiers, which are strings of letters or digits beginning with a letter. The abstract syntax of ToyC is given in terms of PowerEpsilon in the following sections, in a notation quite different from that used in traditional denotational semantics.
10.2.1 Declaration
10.2.1.1 Type Definition of Type_Elem
def Type_Elem = !(T : Prop) [T -> T -> T];
10.2.1.2 Constructors of Type_Elem
def BOOL_TYPE = \(T : Prop) \(x : T, y : T) x; def INT_TYPE = \(T : Prop) \(x : T, y : T) y; dec ERR_TYPE : Type_Elem;
10.2.1.3 Predicate Functions of Type_Elem
def IS_BOOL_TYPE = \(t : Type_Elem) @(t, Bool, TT, FF); def IS_INT_TYPE = \(t : Type_Elem) @(t, Bool, FF, TT);
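The encoding trick used throughout this chapter is the Church encoding of datatypes: a value is the function that selects one of its possible branches. A minimal Python rendering of Type_Elem and its predicates (a hypothetical illustration for intuition only; the PowerEpsilon version also abstracts over the result type T, which untyped Python simply drops):

```python
# Church-encoded two-element type, mirroring Type_Elem: a value of
# the type is a function that, given one result per constructor,
# returns the result belonging to the constructor it represents.
BOOL_TYPE = lambda x, y: x   # picks the first branch
INT_TYPE = lambda x, y: y    # picks the second branch

# Predicates are just applications with True/False as the branches,
# exactly as IS_BOOL_TYPE applies t to TT and FF.
def is_bool_type(t):
    return t(True, False)

def is_int_type(t):
    return t(False, True)

print(is_bool_type(BOOL_TYPE), is_int_type(BOOL_TYPE))  # True False
```

The larger encodings below (Declaration, Binop with thirteen branches, Unop, TExpression) follow exactly the same scheme, with one selector argument per constructor.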
10.2.1.4 Type Definition of Declaration
def Declaration = !(Ti : Prop) [Ti -> [Identifier -> Type_Elem -> Ti] -> [Ti -> Ti -> Ti] -> Ti];
10.2.1.5 Constructors of Declaration
def NULL_DECL = \(T : Prop) \(n : T) \(d : [Identifier -> Type_Elem -> T]) \(s : [T -> T -> T]) n; def DECL_ELEM = \(i : Identifier, t : Type_Elem) \(T : Prop) \(n : T) \(d : [Identifier -> Type_Elem -> T]) \(s : [T -> T -> T]) @(d, i, t); def DECL_SEQ = \(d1 : Declaration, d2 : Declaration) \(T : Prop) \(n : T) \(d : [Identifier -> Type_Elem -> T]) \(s : [T -> T -> T]) @(s, @(d1, T, n, d, s), @(d2, T, n, d, s));
10.2.1.6 Predicate Functions of Declaration
def IS_NULL_DECL = \(d : Declaration) @(d, Bool, TT, \(i : Identifier, t : Type_Elem) FF, \(d1 : Bool, d2 : Bool) FF); def IS_DECL = \(d : Declaration) @(d, Bool, FF, \(i : Identifier, t : Type_Elem) TT, \(d1 : Bool, d2 : Bool) FF); def IS_DECL_SEQ = \(d : Declaration) @(d, Bool, FF, \(i : Identifier, t : Type_Elem) FF, \(d1 : Bool, d2 : Bool) TT);
10.2.1.7 Projectors of Declaration
dec ERR_TYPE_ELEM : Type_Elem; dec ERR_DECL : Declaration; def DECLT = \(x : Declaration, y : Declaration) x; def DECLF = \(x : Declaration, y : Declaration) y; def GET_DECL_IDEN = \(d : Declaration) @(d, Identifier, ERR_IDEN, \(i : Identifier, t : Type_Elem) i, \(d1 : Identifier, d2 : Identifier) ERR_IDEN); def GET_DECL_TYPE = \(d : Declaration)
@(d, Type_Elem, ERR_TYPE_ELEM, \(i : Identifier, t : Type_Elem) t, \(d1 : Type_Elem, d2 : Type_Elem) ERR_TYPE_ELEM); def LEFT_DECL = \(n : Declaration) @(n, [[Declaration -> Declaration -> Declaration] -> Declaration], \(nu : [Declaration -> Declaration -> Declaration]) ERR_DECL, \(id : Identifier, te : Type_Elem, dc : [Declaration -> Declaration -> Declaration]) ERR_DECL, \(l : [[Declaration -> Declaration -> Declaration] -> Declaration], r : [[Declaration -> Declaration -> Declaration] -> Declaration], v : [Declaration -> Declaration -> Declaration]) @(v, @(DECL_SEQ, @(l, DECLT), @(r, DECLT)), @(l, DECLT)), DECLF); def RIGHT_DECL = \(n : Declaration) @(n, [[Declaration -> Declaration -> Declaration] -> Declaration], \(nu : [Declaration -> Declaration -> Declaration]) ERR_DECL, \(id : Identifier, te : Type_Elem, dc : [Declaration -> Declaration -> Declaration]) ERR_DECL, \(l : [[Declaration -> Declaration -> Declaration] -> Declaration], r : [[Declaration -> Declaration -> Declaration] -> Declaration], v : [Declaration -> Declaration -> Declaration]) @(v, @(DECL_SEQ, @(l, DECLT), @(r, DECLT)), @(r, DECLT)), DECLF);
10.2.1.8 Induction Rule of Declaration
dec IndDecl : !(P : [Declaration -> Env -> Type(0)]) [!(r : Env) @(P, NULL_DECL, r) -> !(i : Identifier, t : Type_Elem, r : Env)
@(P, @(DECL_ELEM, i, t), r) -> !(d1 : Declaration, d2 : Declaration, r : Env) [@(P, d1, r) -> @(P, d2, r) -> @(P, @(DECL_SEQ, d1, d2), r)] -> !(d : Declaration, r : Env) @(P, d, r)];
10.2.2 Binary Operators
10.2.2.1 Type Definition of Binop
def Binop = !(T : Prop) [T -> T -> T -> T -> T -> T -> T -> T -> T -> T -> T -> T -> T -> T];
10.2.2.2 Constructors of Binop
def PLUS = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x0; def MINUS = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x1; def TIMES = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x2; def DIV = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x3; def MOD = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T,
x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x4; def EQ = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x5; def UNEQ = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x6; def LT = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x7; def LE = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x8; def GT = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x9; def GE = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x10; def ANDS = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x11; def ORS = \(T : Prop) \(x0 : T, x1 : T, x2 : T, x3 : T, x4 : T, x5 : T, x6 : T, x7 : T, x8 : T, x9 : T, x10 : T, x11 : T, x12 : T) x12; dec ERR_BINOP : Binop;
10.2.2.3 Predicate Functions of Binop
def IS_PLUS = \(o : Binop) @(o, Bool, TT, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF); def IS_MINUS = \(o : Binop) @(o, Bool, FF, TT, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF); def IS_TIMES = \(o : Binop) @(o, Bool, FF, FF, TT, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF); def IS_DIV = \(o : Binop) @(o, Bool, FF, FF, FF, TT, FF, FF, FF, FF, FF, FF, FF, FF, FF); def IS_MOD = \(o : Binop) @(o, Bool, FF, FF, FF, FF, TT, FF, FF, FF, FF, FF, FF, FF, FF); def IS_EQ = \(o : Binop) @(o, Bool, FF, FF, FF, FF, FF, TT, FF, FF, FF, FF, FF, FF, FF); def IS_UNEQ = \(o : Binop) @(o, Bool, FF, FF, FF, FF, FF, FF, TT, FF, FF, FF, FF, FF, FF); def IS_LT = \(o : Binop) @(o, Bool, FF, FF, FF, FF, FF, FF, FF, TT, FF, FF, FF, FF, FF); def IS_LE = \(o : Binop) @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, TT, FF, FF, FF, FF); def IS_GT = \(o : Binop) @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, FF, TT, FF, FF, FF); def IS_GE = \(o : Binop) @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, TT, FF, FF); def IS_ANDS =
\(o : Binop) @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, TT, FF); def IS_ORS = \(o : Binop) @(o, Bool, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, FF, TT);
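The constructors and predicates above follow the standard Church-style encoding of an enumeration: each constructor selects its own alternative among the thirteen, and a predicate applies the operator to a pattern of TT/FF answers. The idea can be sketched in Python (an illustration only; the names and untyped style are ours, and the type parameter T of the PowerEpsilon version is dropped), showing just two of the thirteen cases:

```python
# Sketch of the selector encoding used for Binop: each constructor is a
# function that, given one alternative per case, returns its own
# alternative. Predicates apply the operator to a TT/FF pattern.
TT, FF = True, False

def PLUS(*cases):
    # selects case 0 of the alternatives
    return cases[0]

def MINUS(*cases):
    # selects case 1
    return cases[1]

def IS_PLUS(o):
    # mirrors @(o, Bool, TT, FF, ..., FF): TT in the PLUS position only
    return o(TT, FF)

def IS_MINUS(o):
    return o(FF, TT)
```

Here IS_PLUS(PLUS) reduces to TT in the same way that @(PLUS, Bool, TT, FF, ..., FF) does in PowerEpsilon.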
10.2.3 Unary Operators
10.2.3.1 Type Definition of Unop
def Unop = !(T : Prop) [T -> T -> T];
10.2.3.2 Constructors of Unop
def UNNOT = \(T : Prop) \(x : T, y : T) x; def UNNEG = \(T : Prop) \(x : T, y : T) y; dec ERR_UNOP : Unop;
10.2.3.3 Predicate Functions of Unop
def IS_UNNOT = \(o : Unop) @(o, Bool, TT, FF); def IS_UNNEG = \(o : Unop) @(o, Bool, FF, TT);
10.2.4 Expression
10.2.4.1 Type Definition of TExpression
dec TExpr_list : Prop; def TExpression = !(T : Prop) [T ->
[Bool -> T] -> [Int -> T] -> [Identifier -> T] -> [Binop -> T -> T -> T] -> [Unop -> T -> T] -> [TExpr_list -> T] -> %-- Tuple --% [T -> Nat -> T] -> %-- Projection --% [T -> T -> T] -> [T -> T -> T -> T] -> T]; def TExpr_list = @(List, TExpression);
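TExpression is a fold (Böhm–Berarducci) encoding: an expression is a polymorphic function that, given one handler per syntactic form, replaces every constructor by its handler, bottom-up. A minimal Python sketch (illustrative names of our own, only the null, boolean, integer, and binary cases, handlers passed positionally) shows how folding with evaluation handlers turns a tree directly into a value:

```python
# Sketch of the fold encoding behind TExpression: a constructor builds a
# function that takes the handlers and applies its own handler to the
# already-folded subterms.
def int_expr(i):
    return lambda null, bool_, int_, bin_: int_(i)

def bin_expr(op, e1, e2):
    return lambda null, bool_, int_, bin_: bin_(
        op,
        e1(null, bool_, int_, bin_),
        e2(null, bool_, int_, bin_))

# Folding 2 + 3 with "evaluate" handlers yields the value directly.
e = bin_expr(lambda a, b: a + b, int_expr(2), int_expr(3))
result = e(None, lambda b: b, lambda i: i, lambda op, a, b: op(a, b))  # 5
```

Note that the recursion lives entirely in the constructors: bin_expr folds its subterms immediately, so "evaluation" is nothing more than handler application.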
10.2.4.2 Constructors of TExpression
We declare a special function-application expression for which no semantic definition is given; it is intended to be a system call defined outside the language. dec APPL_EXPR : [Identifier -> TExpr_list -> TExpression]; dec NULL_EXPR : TExpression; dec IF_EXPR : [TExpression -> TExpression -> TExpression -> TExpression]; dec BOOL_EXPR : [Bool -> TExpression]; def BOOL_EXPR = \(b : Bool) \(T : Prop) \(null : T, bool : [Bool -> T], int : [Int -> T], iden : [Identifier -> T], bin : [Binop -> T -> T -> T], un : [Unop -> T -> T], tupl : [TExpr_list -> T], pos : [T -> Nat -> T], ift : [T -> T -> T], if : [T -> T -> T -> T]) @(bool, b); def INT_EXPR = \(i : Int) \(T : Prop)
\(null : T, bool : [Bool -> T], int : [Int -> T], iden : [Identifier -> T], bin : [Binop -> T -> T -> T], un : [Unop -> T -> T], tupl : [TExpr_list -> T], pos : [T -> Nat -> T], ift : [T -> T -> T], if : [T -> T -> T -> T]) @(int, i); dec IDEN_EXPR : [Identifier -> TExpression]; def IDEN_EXPR = \(i : Identifier) \(T : Prop) \(null : T, bool : [Bool -> T], int : [Int -> T], iden : [Identifier -> T], bin : [Binop -> T -> T -> T], un : [Unop -> T -> T], tupl : [TExpr_list -> T], pos : [T -> Nat -> T], ift : [T -> T -> T], if : [T -> T -> T -> T]) @(iden, i); def BIN_EXPR = \(o : Binop, e1 : TExpression, e2 : TExpression) \(T : Prop) \(null : T, bool : [Bool -> T], int : [Int -> T], iden : [Identifier -> T], bin : [Binop -> T -> T -> T], un : [Unop -> T -> T], tupl : [TExpr_list -> T], pos : [T -> Nat -> T], ift : [T -> T -> T], if : [T -> T -> T -> T]) @(bin, o, @(e1, T, null, bool, int, iden, bin, un, tupl, pos, ift, if), @(e2, T, null, bool, int, iden, bin, un, tupl, pos, ift, if));
def UN_EXPR = \(o : Unop, e : TExpression) \(T : Prop) \(null : T, bool : [Bool -> T], int : [Int -> T], iden : [Identifier -> T], bin : [Binop -> T -> T -> T], un : [Unop -> T -> T], tupl : [TExpr_list -> T], pos : [T -> Nat -> T], ift : [T -> T -> T], if : [T -> T -> T -> T]) @(un, o, @(e, T, null, bool, int, iden, bin, un, tupl, pos, ift, if)); def IFT_EXPR = \(e1 : TExpression, e2 : TExpression) \(T : Prop) \(null : T, bool : [Bool -> T], int : [Int -> T], iden : [Identifier -> T], bin : [Binop -> T -> T -> T], un : [Unop -> T -> T], tupl : [TExpr_list -> T], pos : [T -> Nat -> T], ift : [T -> T -> T], if : [T -> T -> T -> T]) @(ift, @(e1, T, null, bool, int, iden, bin, un, tupl, pos, ift, if), @(e2, T, null, bool, int, iden, bin, un, tupl, pos, ift, if)); def IF_EXPR = \(e1 : TExpression, e2 : TExpression, e3 : TExpression) \(T : Prop) \(null : T, bool : [Bool -> T], int : [Int -> T], iden : [Identifier -> T], bin : [Binop -> T -> T -> T], un : [Unop -> T -> T], tupl : [TExpr_list -> T], pos : [T -> Nat -> T], ift : [T -> T -> T], if : [T -> T -> T -> T])
@(if, @(e1, T, null, bool, int, iden, bin, un, tupl, pos, ift, if), @(e2, T, null, bool, int, iden, bin, un, tupl, pos, ift, if), @(e3, T, null, bool, int, iden, bin, un, tupl, pos, ift, if)); dec TUPLE_EXPR : [TExpr_list -> TExpression]; dec PROJ_EXPR : [TExpression -> Nat -> TExpression]; dec ERR_EXPR : TExpression;
10.2.4.3 Predicate Functions of TExpression
def IS_BOOL_EXPR = \(e : TExpression) @(e, Bool, FF, \(b : Bool) TT, \(i : Int) FF, \(n : Identifier) FF, \(o : Binop, e1 : Bool, e2 : Bool) FF, \(u : Unop, e : Bool) FF, \(l : TExpr_list) FF, \(pos : Bool, n : Nat) FF, \(ife : Bool, the : Bool) FF, \(ife : Bool, the : Bool, ele : Bool) FF); def IS_INT_EXPR = \(e : TExpression) @(e, Bool, FF, \(b : Bool) FF, \(i : Int) TT, \(n : Identifier) FF, \(o : Binop, e1 : Bool, e2 : Bool) FF, \(u : Unop, e : Bool) FF, \(l : TExpr_list) FF, \(pos : Bool, n : Nat) FF, \(ife : Bool, the : Bool) FF, \(ife : Bool, the : Bool, ele : Bool) FF); def IS_IDEN_EXPR = \(e : TExpression)
@(e, Bool, FF, \(b : Bool) FF, \(i : Int) FF, \(n : Identifier) TT, \(o : Binop, e1 : Bool, e2 : Bool) FF, \(u : Unop, e : Bool) FF, \(l : TExpr_list) FF, \(pos : Bool, n : Nat) FF, \(ife : Bool, the : Bool) FF, \(ife : Bool, the : Bool, ele : Bool) FF);
def IS_BIN_EXPR = \(e : TExpression) @(e, Bool, FF, \(b : Bool) FF, \(i : Int) FF, \(n : Identifier) FF, \(o : Binop, e1 : Bool, e2 : Bool) TT, \(u : Unop, e : Bool) FF, \(l : TExpr_list) FF, \(pos : Bool, n : Nat) FF, \(ife : Bool, the : Bool) FF, \(ife : Bool, the : Bool, ele : Bool) FF); def IS_UN_EXPR = \(e : TExpression) @(e, Bool, FF, \(b : Bool) FF, \(i : Int) FF, \(n : Identifier) FF, \(o : Binop, e1 : Bool, e2 : Bool) FF, \(u : Unop, e : Bool) TT, \(l : TExpr_list) FF, \(pos : Bool, n : Nat) FF, \(ife : Bool, the : Bool) FF, \(ife : Bool, the : Bool, ele : Bool) FF);
10.2.4.4 Projectors of TExpression
def EXPRT = \(x : TExpression, y : TExpression) x; def EXPRF = \(x : TExpression, y : TExpression) y; dec GET_EXPR_TUPLE : [TExpression -> TExpr_list]; dec GET_EXPR_POS : [TExpression -> TExpression]; dec GET_EXPR_IDX : [TExpression -> Nat]; def GET_EXPR_BOOL = \(e : TExpression) @(e, Bool, ERR_BOOL, \(b : Bool) b, \(i : Int) ERR_BOOL, \(n : Identifier) ERR_BOOL, \(o : Binop, e1 : Bool, e2 : Bool) ERR_BOOL, \(u : Unop, e : Bool) ERR_BOOL, \(l : TExpr_list) ERR_BOOL, \(pos : Bool, n : Nat) ERR_BOOL, \(ife : Bool, the : Bool) ERR_BOOL, \(ife : Bool, the : Bool, ele : Bool) ERR_BOOL); def GET_EXPR_INT = \(e : TExpression) @(e, Int, ERR_INT, \(b : Bool) ERR_INT, \(i : Int) i, \(n : Identifier) ERR_INT, \(o : Binop, e1 : Int, e2 : Int) ERR_INT, \(u : Unop, e : Int) ERR_INT, \(l : TExpr_list) ERR_INT, \(pos : Int, n : Nat) ERR_INT, \(ife : Int, the : Int) ERR_INT, \(ife : Int, the : Int, ele : Int) ERR_INT); def GET_EXPR_IDEN = \(e : TExpression) @(e, Identifier, ERR_IDEN, \(b : Bool) ERR_IDEN, \(i : Int) ERR_IDEN, \(n : Identifier) n, \(o : Binop, e1 : Identifier, e2 : Identifier) ERR_IDEN, \(u : Unop, e : Identifier) ERR_IDEN, \(l : TExpr_list) ERR_IDEN,
\(pos : Identifier, n : Nat) ERR_IDEN, \(ife : Identifier, the : Identifier) ERR_IDEN, \(ife : Identifier, the : Identifier, ele : Identifier) ERR_IDEN); def GET_EXPR_BINOP = \(e : TExpression) @(e, Binop, ERR_BINOP, \(b : Bool) ERR_BINOP, \(i : Int) ERR_BINOP, \(n : Identifier) ERR_BINOP, \(o : Binop, e1 : Binop, e2 : Binop) o, \(u : Unop, e : Binop) ERR_BINOP, \(l : TExpr_list) ERR_BINOP, \(pos : Binop, n : Nat) ERR_BINOP, \(ife : Binop, the : Binop) ERR_BINOP, \(ife : Binop, the : Binop, ele : Binop) ERR_BINOP); def GET_EXPR_LEFT = \(e : TExpression) @(e, [[TExpression -> TExpression -> TExpression] -> TExpression], \(e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(b : Bool, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(i : Int, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(n : Identifier, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(o : Binop, e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) @(v, @(BIN_EXPR, o, @(e1, EXPRT), @(e2, EXPRT)), @(e1, EXPRT)), \(o : Unop, e : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(l : TExpr_list, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(pos : [[TExpression -> TExpression -> TExpression] -> TExpression], n : Nat,
v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] -> TExpression], e3 : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, EXPRF); def GET_EXPR_RIGHT = \(e : TExpression) @(e, [[TExpression -> TExpression -> TExpression] -> TExpression], \(e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(b : Bool, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(i : Int, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(n : Identifier, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(o : Binop, e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) @(v, @(BIN_EXPR, o, @(e1, EXPRT), @(e2, EXPRT)), @(e2, EXPRT)), \(o : Unop, e : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(l : TExpr_list, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(pos : [[TExpression -> TExpression -> TExpression] -> TExpression], n : Nat,
v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] -> TExpression], e3 : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, EXPRF); def GET_EXPR_UNOP = \(e : TExpression) @(e, Unop, ERR_UNOP, \(b : Bool) ERR_UNOP, \(i : Int) ERR_UNOP, \(n : Identifier) ERR_UNOP, \(o : Binop, e1 : Unop, e2 : Unop) ERR_UNOP, \(u : Unop, e : Unop) u, \(l : TExpr_list) ERR_UNOP, \(pos : Unop, n : Nat) ERR_UNOP, \(e1 : Unop, e2 : Unop) ERR_UNOP, \(e1 : Unop, e2 : Unop, e3 : Unop) ERR_UNOP); def GET_EXPR_UN = \(e : TExpression) @(e, [[TExpression -> TExpression -> TExpression] -> TExpression], \(e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(b : Bool, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(i : Int, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(n : Identifier, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(o : Binop, e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] ->
TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(o : Unop, e : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) @(v, @(UN_EXPR, o, @(e, EXPRT)), @(e, EXPRT)), \(l : TExpr_list, e : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(pos : [[TExpression -> TExpression -> TExpression] -> TExpression], n : Nat, v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, \(e1 : [[TExpression -> TExpression -> TExpression] -> TExpression], e2 : [[TExpression -> TExpression -> TExpression] -> TExpression], e3 : [[TExpression -> TExpression -> TExpression] -> TExpression], v : [TExpression -> TExpression -> TExpression]) ERR_EXPR, EXPRF); dec GET_EXPR_IF : [TExpression -> TExpression]; dec GET_EXPR_TH : [TExpression -> TExpression]; dec GET_EXPR_EL : [TExpression -> TExpression]; dec GET_EXPR_IFT : [TExpression -> TExpression]; dec GET_EXPR_THT : [TExpression -> TExpression];
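GET_EXPR_LEFT and its siblings cannot be written as plain folds, because a fold sees only the results computed from subterms, never the subterms themselves. The definitions above therefore instantiate T with a function type and thread a pair (rebuilt term, selected subterm) through the fold — the same device as the predecessor function on Church numerals. A Python sketch of the trick on numerals (our own names; nothing beyond the encoding itself is assumed):

```python
# The pair-threading trick behind GET_EXPR_LEFT, shown on Church
# numerals: fold each subterm up to the pair (n, n-1), then project
# the second component.
zero = lambda s, z: z
succ = lambda n: (lambda s, z: s(n(s, z)))

def pred(n):
    # step function maps (k, k-1) to (k+1, k); start from (0, 0)
    pair = n(lambda p: (succ(p[0]), p[0]), (zero, zero))
    return pair[1]

to_int = lambda n: n(lambda x: x + 1, 0)  # decode for inspection
```

In GET_EXPR_LEFT the first component rebuilds the expression with BIN_EXPR while the second retains the left subterm; the final EXPRF projects the subterm out, and every non-binary case collapses to ERR_EXPR.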
10.2.4.5 Induction Rule of TExpression
dec IndExpr : !(P : [TExpression -> Env -> Type(0)]) [!(r : Env) @(P, NULL_EXPR) -> !(x : Bool, r : Env) @(P, @(BOOL_EXPR, x), r) -> !(x : Int, r : Env) @(P, @(INT_EXPR, x), r) -> !(x : Identifier, r : Env) @(P, @(IDEN_EXPR, x), r) -> !(o : Binop, e1 : TExpression, e2 : TExpression, r : Env) [@(P, e1, r) -> @(P, e2, r) -> @(P, @(BIN_EXPR, o, e1, e2), r)] -> !(o : Unop, e : TExpression, r : Env)
[@(P, e, r) -> @(P, @(UN_EXPR, o, e), r)] -> !(e1 : TExpression, e2 : TExpression, r : Env) [@(P, e1, r) -> @(P, e2, r) -> @(P, @(IFT_EXPR, e1, e2), r)] -> !(e1 : TExpression, e2 : TExpression, e3 : TExpression, r : Env) [@(P, e1, r) -> @(P, e2, r) -> @(P, e3, r) -> @(P, @(IF_EXPR, e1, e2, e3), r)] -> !(e : TExpression, r : Env) @(P, e, r)];
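The induction rule is declared (dec) rather than defined, since the impredicative encoding does not prove it internally; it plays the role of the recursor that a primitive inductive definition would generate automatically. As a rough analogy, a Lean 4 sketch with a simplified Expr of our own (omitting the Env parameter and most constructors):

```lean
-- Sketch (Lean 4, hypothetical simplified type): for a primitive
-- inductive definition, the induction principle corresponding to
-- IndExpr is generated automatically as Expr.rec.
inductive Expr where
  | null
  | boolE (b : Bool)
  | intE  (i : Int)
  | binE  (e1 e2 : Expr)

-- Expr.rec: for any motive P, proofs for each constructor (with
-- inductive hypotheses for binE) yield a proof of P e for every e.
#check @Expr.rec
```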
10.2.5 Statement
10.2.5.1 Type Definition of Statement
def Statement = !(Ti : Prop) [Ti -> %-- Null --% [Identifier -> TExpression -> Ti] -> %-- Assignment --% [TExpression -> Ti -> Ti] -> %-- If-Then --% [TExpression -> Ti -> Ti -> Ti] -> %-- If-Then-Else --% [TExpression -> Ti -> Ti] -> %-- While --% [Ti -> Ti -> Ti] -> %-- Sequence --% [TExpression -> Ti] -> %-- Output --% [Identifier -> Ti] -> %-- Input --% [Identifier -> Ti -> Ti] -> %-- Label --% [Identifier -> Ti] -> %-- Goto --% Ti];
10.2.5.2 Constructors of Statement
dec MK_NULL_STAT : Statement; def MK_ASSIGNMENT = \(i : Identifier, e : TExpression) \(Ti : Prop) \(null : Ti) \(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(assign, i, e);
def MK_IFTHEN = \(e : TExpression, s : Statement) \(Ti : Prop) \(null : Ti) \(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(ifthen, e, @(s, Ti, null, assign, ifthen, ifthel, while, seq, output, input, label, goto)); def MK_IFTHEL = \(e : TExpression, s1 : Statement, s2 : Statement) \(Ti : Prop) \(null : Ti) \(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(ifthel, e, @(s1, Ti, null, assign, ifthen, ifthel, while, seq, output, input, label, goto), @(s2, Ti, null, assign, ifthen, ifthel, while, seq, output, input, label, goto)); def MK_WHILE = \(e : TExpression, s : Statement)
\(Ti : Prop) \(null : Ti) \(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(while, e, @(s, Ti, null, assign, ifthen, ifthel, while, seq, output, input, label, goto)); dec MK_SEQ : [Statement -> Statement -> Statement]; def MK_SEQ = \(s1 : Statement, s2 : Statement) \(Ti : Prop) \(null : Ti) \(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(seq, @(s1, Ti, null, assign, ifthen, ifthel, while, seq, output, input, label, goto), @(s2, Ti, null, assign, ifthen, ifthel, while, seq, output, input, label, goto)); def MK_OUTPUT = \(e : TExpression) \(Ti : Prop) \(null : Ti)
\(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(output, e);
def MK_INPUT = \(i : Identifier) \(Ti : Prop) \(null : Ti) \(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(input, i); def MK_LABEL = \(l : Identifier, s : Statement) \(Ti : Prop) \(null : Ti) \(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(label, l, @(s, Ti, null, assign, ifthen, ifthel,
while, seq, output, input, label, goto)); def MK_GOTO = \(i : Identifier) \(Ti : Prop) \(null : Ti) \(assign : [Identifier -> TExpression -> Ti]) \(ifthen : [TExpression -> Ti -> Ti]) \(ifthel : [TExpression -> Ti -> Ti -> Ti]) \(while : [TExpression -> Ti -> Ti]) \(seq : [Ti -> Ti -> Ti]) \(output : [TExpression -> Ti]) \(input : [Identifier -> Ti]) \(label : [Identifier -> Ti -> Ti]) \(goto : [Identifier -> Ti]) @(goto, i);
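Because each constructor folds its sub-statements eagerly, any compositional interpretation of Statement — evaluation, pretty-printing, counting — is obtained simply by choosing the handlers. A Python sketch (illustrative names of our own, only the assignment and sequence cases, expressions represented as strings) that instantiates the handlers with string builders:

```python
# Sketch: instantiating the Statement handlers with string-building
# functions pretty-prints a program in one fold.
def mk_assignment(i, e):
    return lambda null, assign, seq, while_: assign(i, e)

def mk_seq(s1, s2):
    return lambda null, assign, seq, while_: seq(
        s1(null, assign, seq, while_),
        s2(null, assign, seq, while_))

prog = mk_seq(mk_assignment("x", "1"), mk_assignment("y", "x+1"))
text = prog("",
            lambda i, e: f"{i} := {e};",
            lambda a, b: a + " " + b,
            None)
```

Folding prog this way produces the string "x := 1; y := x+1;"; swapping the handlers for state transformers would instead yield an evaluator over an environment.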
10.2.5.3 Predicate Functions of Statement
def IS_ASSIGN_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) TT, \(e : TExpression, s : Bool) FF, \(e : TExpression, s1 : Bool, s2 : Bool) FF, \(e : TExpression, s : Bool) FF, \(s1 : Bool, s2 : Bool) FF, \(e : TExpression) FF, \(i : Identifier) FF, \(l : Identifier, s : Bool) FF, \(l : Identifier) FF); def IS_IFTHEN_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) FF, \(e : TExpression, s : Bool) TT,
\(e : TExpression, s1 : Bool, s2 : Bool) FF, \(e : TExpression, s : Bool) FF, \(s1 : Bool, s2 : Bool) FF, \(e : TExpression) FF, \(i : Identifier) FF, \(l : Identifier, s : Bool) FF, \(l : Identifier) FF); def IS_IFTHEL_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) FF, \(e : TExpression, s : Bool) FF, \(e : TExpression, s1 : Bool, s2 : Bool) TT, \(e : TExpression, s : Bool) FF, \(s1 : Bool, s2 : Bool) FF, \(e : TExpression) FF, \(i : Identifier) FF, \(l : Identifier, s : Bool) FF, \(l : Identifier) FF); def IS_WHILE_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) FF, \(e : TExpression, s : Bool) FF, \(e : TExpression, s1 : Bool, s2 : Bool) FF, \(e : TExpression, s : Bool) TT, \(s1 : Bool, s2 : Bool) FF, \(e : TExpression) FF, \(i : Identifier) FF, \(l : Identifier, s : Bool) FF, \(l : Identifier) FF); def IS_SEQ_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) FF, \(e : TExpression, s : Bool) FF, \(e : TExpression, s1 : Bool, s2 : Bool) FF, \(e : TExpression, s : Bool) FF,
\(s1 : Bool, s2 : Bool) TT, \(e : TExpression) FF, \(i : Identifier) FF, \(l : Identifier, s : Bool) FF, \(l : Identifier) FF); def IS_OUTPUT_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) FF, \(e : TExpression, s : Bool) FF, \(e : TExpression, s1 : Bool, s2 : Bool) FF, \(e : TExpression, s : Bool) FF, \(s1 : Bool, s2 : Bool) FF, \(e : TExpression) TT, \(i : Identifier) FF, \(l : Identifier, s : Bool) FF, \(l : Identifier) FF); def IS_INPUT_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) FF, \(e : TExpression, s : Bool) FF, \(e : TExpression, s1 : Bool, s2 : Bool) FF, \(e : TExpression, s : Bool) FF, \(s1 : Bool, s2 : Bool) FF, \(e : TExpression) FF, \(i : Identifier) TT, \(l : Identifier, s : Bool) FF, \(l : Identifier) FF); def IS_LABEL_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) FF, \(e : TExpression, s : Bool) FF, \(e : TExpression, s1 : Bool, s2 : Bool) FF, \(e : TExpression, s : Bool) FF, \(s1 : Bool, s2 : Bool) FF, \(e : TExpression) FF,
\(i : Identifier) FF, \(l : Identifier, s : Bool) TT, \(l : Identifier) FF); def IS_GOTO_STAT = \(s : Statement) @(s, Bool, FF, \(i : Identifier, e : TExpression) FF, \(e : TExpression, s : Bool) FF, \(e : TExpression, s1 : Bool, s2 : Bool) FF, \(e : TExpression, s : Bool) FF, \(s1 : Bool, s2 : Bool) FF, \(e : TExpression) FF, \(i : Identifier) FF, \(l : Identifier, s : Bool) FF, \(l : Identifier) TT);
10.2.5.4 Projectors of Statement
dec ERR_STAT : Statement; def STATT = \(x : Statement, y : Statement) x; def STATF = \(x : Statement, y : Statement) y; def GET_ASSIGN_IDEN = \(s : Statement) @(s, Identifier, ERR_IDEN, \(i : Identifier, e : TExpression) i, \(e : TExpression, s : Identifier) ERR_IDEN, \(e : TExpression, s1 : Identifier, s2 : Identifier) ERR_IDEN, \(e : TExpression, s : Identifier) ERR_IDEN, \(s1 : Identifier, s2 : Identifier) ERR_IDEN, \(e : TExpression) ERR_IDEN, \(i : Identifier) ERR_IDEN, \(l : Identifier, s : Identifier) ERR_IDEN, \(l : Identifier) ERR_IDEN); def GET_ASSIGN_EXPR = \(s : Statement) @(s, TExpression, ERR_EXPR, \(i : Identifier, e : TExpression) e, \(e : TExpression, s : TExpression) ERR_EXPR,
\(e : TExpression, s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression, s : TExpression) ERR_EXPR, \(s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression) ERR_EXPR, \(i : Identifier) ERR_EXPR, \(l : Identifier, s : TExpression) ERR_EXPR, \(l : Identifier) ERR_EXPR); def GET_IFTHEN_EXPR = \(s : Statement) @(s, TExpression, ERR_EXPR, \(i : Identifier, e : TExpression) ERR_EXPR, \(e : TExpression, s : TExpression) e, \(e : TExpression, s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression, s : TExpression) ERR_EXPR, \(s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression) ERR_EXPR, \(i : Identifier) ERR_EXPR, \(l : Identifier, s : TExpression) ERR_EXPR, \(l : Identifier) ERR_EXPR); def GET_IFTHEN_STAT = \(s : Statement) @(s, [[Statement -> Statement -> Statement] -> Statement], \(v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) @(v, @(MK_IFTHEN, e, @(s, STATT)), @(s, STATT)), \(e : TExpression, s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, v : [Statement -> Statement -> Statement])
ERR_STAT, \(i : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, STATF); def GET_IFTHEL_EXPR = \(s : Statement) @(s, TExpression, ERR_EXPR, \(i : Identifier, e : TExpression) ERR_EXPR, \(e : TExpression, s : TExpression) ERR_EXPR, \(e : TExpression, s1 : TExpression, s2 : TExpression) e, \(e : TExpression, s : TExpression) ERR_EXPR, \(s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression) ERR_EXPR, \(i : Identifier) ERR_EXPR, \(l : Identifier, s : TExpression) ERR_EXPR, \(l : Identifier) ERR_EXPR); def GET_IFTHEL_THEN = \(s : Statement) @(s, [[Statement -> Statement -> Statement] -> Statement], \(v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) @(v, @(MK_IFTHEL, e, @(s1, STATT), @(s2, STATT)), @(s1, STATT)), \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(s1 : [[Statement -> Statement -> Statement] -> Statement],
s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, STATF); def GET_IFTHEL_ELSE = \(s : Statement) @(s, [[Statement -> Statement -> Statement] -> Statement], \(v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) @(v, @(MK_IFTHEL, e, @(s1, STATT), @(s2, STATT)), @(s2, STATT)), \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier,
s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, STATF); def GET_WHILE_EXPR = \(s : Statement) @(s, TExpression, ERR_EXPR, \(i : Identifier, e : TExpression) ERR_EXPR, \(e : TExpression, s : TExpression) ERR_EXPR, \(e : TExpression, s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression, s : TExpression) e, \(s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression) ERR_EXPR, \(i : Identifier) ERR_EXPR, \(l : Identifier, s : TExpression) ERR_EXPR, \(l : Identifier) ERR_EXPR); def GET_WHILE_STAT = \(s : Statement) @(s, [[Statement -> Statement -> Statement] -> Statement], \(v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) @(v, @(MK_WHILE, e, @(s, STATT)), @(s, STATT)), \(s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, v : [Statement -> Statement -> Statement])
ERR_STAT, \(i : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, STATF); def GET_SEQ_HEAD = \(s : Statement) @(s, [[Statement -> Statement -> Statement] -> Statement], \(v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) @(v, @(MK_SEQ, @(s1, STATT), @(s2, STATT)), @(s1, STATT)), \(e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, v : [Statement -> Statement -> Statement])
ERR_STAT, STATF); def GET_SEQ_TAIL = \(s : Statement) @(s, [[Statement -> Statement -> Statement] -> Statement], \(v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) @(v, @(MK_SEQ, @(s1, STATT), @(s2, STATT)), @(s2, STATT)), \(e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, STATF); def GET_OUTPUT_EXPR = \(s : Statement) @(s, TExpression, ERR_EXPR, \(i : Identifier, e : TExpression) ERR_EXPR, \(e : TExpression, s : TExpression) ERR_EXPR,
\(e : TExpression, s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression, s : TExpression) ERR_EXPR, \(s1 : TExpression, s2 : TExpression) ERR_EXPR, \(e : TExpression) e, \(i : Identifier) ERR_EXPR, \(l : Identifier, s : TExpression) ERR_EXPR, \(l : Identifier) ERR_EXPR); def GET_INPUT_IDEN = \(s : Statement) @(s, Identifier, ERR_IDEN, \(i : Identifier, e : TExpression) ERR_IDEN, \(e : TExpression, s : Identifier) ERR_IDEN, \(e : TExpression, s1 : Identifier, s2 : Identifier) ERR_IDEN, \(e : TExpression, s : Identifier) ERR_IDEN, \(s1 : Identifier, s2 : Identifier) ERR_IDEN, \(e : TExpression) ERR_IDEN, \(i : Identifier) i, \(l : Identifier, s : Identifier) ERR_IDEN, \(l : Identifier) ERR_IDEN); def GET_LABEL_IDEN = \(s : Statement) @(s, Identifier, ERR_IDEN, \(i : Identifier, e : TExpression) ERR_IDEN, \(e : TExpression, s : Identifier) ERR_IDEN, \(e : TExpression, s1 : Identifier, s2 : Identifier) ERR_IDEN, \(e : TExpression, s : Identifier) ERR_IDEN, \(s1 : Identifier, s2 : Identifier) ERR_IDEN, \(e : TExpression) ERR_IDEN, \(i : Identifier) ERR_IDEN, \(l : Identifier, s : Identifier) l, \(l : Identifier) ERR_IDEN); def GET_LABEL_STAT = \(s : Statement) @(s, [[Statement -> Statement -> Statement] -> Statement], \(v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT,
\(e : TExpression, s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(s1 : [[Statement -> Statement -> Statement] -> Statement], s2 : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) ERR_STAT, \(e : TExpression, v : [Statement -> Statement -> Statement]) ERR_STAT, \(i : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, \(l : Identifier, s : [[Statement -> Statement -> Statement] -> Statement], v : [Statement -> Statement -> Statement]) @(v, @(MK_LABEL, l, @(s, STATT)), @(s, STATT)), \(l : Identifier, v : [Statement -> Statement -> Statement]) ERR_STAT, STATF); def GET_GOTO_IDEN = \(s : Statement) @(s, Identifier, ERR_IDEN, \(i : Identifier, e : TExpression) ERR_IDEN, \(e : TExpression, s : Identifier) ERR_IDEN, \(e : TExpression, s1 : Identifier, s2 : Identifier) ERR_IDEN, \(e : TExpression, s : Identifier) ERR_IDEN, \(s1 : Identifier, s2 : Identifier) ERR_IDEN, \(e : TExpression) ERR_IDEN, \(i : Identifier) ERR_IDEN, \(l : Identifier, s : Identifier) ERR_IDEN, \(l : Identifier) l);
10.2.5.5 Induction Rule of Statement
dec IndStat : !(P : [Statement -> Env -> Type(0)]) [!(i : Identifier, e : TExpression, r : Env) @(P, @(MK_ASSIGNMENT, i, e), r) ->
!(e : TExpression, s : Statement, r : Env) [@(P, s, r) -> @(P, @(MK_IFTHEN, e, s), r)] -> !(e : TExpression, s1 : Statement, s2 : Statement, r : Env) [@(P, s1, r) -> @(P, s2, r) -> @(P, @(MK_IFTHEL, e, s1, s2), r)] -> !(e : TExpression, s : Statement, r : Env) [@(P, s, r) -> @(P, @(MK_WHILE, e, s), r)] -> !(s1 : Statement, s2 : Statement, r : Env) [@(P, s1, r) -> @(P, s2, r) -> @(P, @(MK_SEQ, s1, s2), r)] -> !(e : TExpression, r : Env) @(P, @(MK_OUTPUT, e), r) -> !(i : Identifier, r : Env) @(P, @(MK_INPUT, i), r) -> !(l : Identifier, s : Statement, r : Env) [@(P, s, r) -> @(P, @(MK_LABEL, l, s), r)] -> !(l : Identifier, r : Env) @(P, @(MK_GOTO, l), r) -> !(s : Statement, r : Env) @(P, s, r)];
10.2.6 Program
10.2.6.1 Type Definition of Program
def Program = !(T : Prop) [[Identifier -> Declaration -> Statement -> T] -> T];
10.2.6.2 Constructors of Program
def PROG = \(i : Identifier, d : Declaration, s: Statement) \(T : Prop) \(p : [Identifier -> Declaration -> Statement -> T]) @(p, i, d, s);
10.2.6.3 Projectors of Program
def GET_PROG_DECL = \(p : Program) @(p, Declaration, \(i : Identifier, d : Declaration, s : Statement) d); def GET_PROG_STAT =
\(p : Program) @(p, Statement, \(i : Identifier, d : Declaration, s : Statement) s);
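Program is a Church-encoded triple: applying a program to a consumer function p hands over the node name, declaration, and statement, and a projector is just an application with a consumer that returns the wanted component. The same encoding can be sketched in Python (the names and the string components here are illustrative only):

```python
# Church-encoded triple: a "program" packages its three components
# and hands them to whatever consumer function p it is applied to.
def PROG(i, d, s):
    return lambda p: p(i, d, s)

# Projectors select one component by passing a suitable consumer.
def GET_PROG_DECL(prog):
    return prog(lambda i, d, s: d)

def GET_PROG_STAT(prog):
    return prog(lambda i, d, s: s)

prog = PROG("main", "var x : int", "x := 1")
assert GET_PROG_DECL(prog) == "var x : int"
assert GET_PROG_STAT(prog) == "x := 1"
```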
10.2.6.4 Induction Rule of Program
dec IndProg : !(P : [Program -> Type(0)], Q : [Declaration -> Env -> Type(0)], R : [Statement -> Env -> Type(0)]) !(i : Identifier, d : Declaration, s : Statement, r : Env) [@(Q, d, r) -> @(R, s, r) -> @(P, @(PROG, i, d, s))]; dec Ind : !(P : [Program -> Type(0)], Q : [Declaration -> Env -> Type(0)], R : [Statement -> Env -> Type(0)]) [!(i : Identifier, d : Declaration, s : Statement, r : Env) @(P, @(PROG, i, d, s)) -> !(p : Program) @(P, p)];
10.3 Static Semantics of ToyC
10.3.1 Type Information
10.3.1.1 Definition of Type Info
dec TypeInfoList : Prop; def Type_Info = !(T : Prop) [T -> T -> T -> T -> [TypeInfoList -> T] -> T]; def TypeInfoList = @(List, Type_Info);
10.3.1.2 Constructors of Type Info
def ERR_TYPED = \(T : Prop) \(x : T, y : T, u : T, v : T, w : [TypeInfoList -> T])
x; def WEL_TYPED = \(T : Prop) \(x : T, y : T, u : T, v : T, w : [TypeInfoList -> T]) y; def BOO_TYPED = \(T : Prop) \(x : T, y : T, u : T, v : T, w : [TypeInfoList -> T]) u; def INT_TYPED = \(T : Prop) \(x : T, y : T, u : T, v : T, w : [TypeInfoList -> T]) v; def LST_TYPED = \(l : TypeInfoList) \(T : Prop) \(x : T, y : T, u : T, v : T, w : [TypeInfoList -> T]) @(w, l);
10.3.1.3 Predicate Functions of Type Info
dec IS_LST_TYPED : [Type_Info -> Bool]; dec IS_WEL_TYPED : [Type_Info -> Bool];
10.3.1.4 Projectors of Type Info
dec GET_LST_TYPED : [Type_Info -> TypeInfoList];
10.3.2 Type Environment
10.3.2.1 Definition of TEnv
def TEnv = [Identifier -> Type_Info]; dec TENV_EQ : [TEnv -> TEnv -> Bool]; dec IDEN_EQ : [Identifier -> Identifier -> Bool];
dec TYPE_INFO_EQ : [Type_Info -> Type_Info -> Bool];
10.3.2.2 Constructors of TEnv
dec EMPTY_TENV : TEnv; def ENTER_TENV = \(r : TEnv, i : Identifier, t : Type_Info) \(j : Identifier) @(GIF_THEN_ELSE, Type_Info, @(TENV_EQ, r, EMPTY_TENV), ERR_TYPED, @(GIF_THEN_ELSE, Type_Info, @(IDEN_EQ, i, j), t, @(r, j))); dec COMP_TENV : [TEnv -> TEnv -> TEnv];
10.3.2.3 Predicate Functions of TEnv
def IS_IN_TENV = \(r : TEnv, i : Identifier) let t = @(r, i) in @(GIF_THEN_ELSE, Bool, @(TYPE_INFO_EQ, t, ERR_TYPED), FF, TT);
10.3.3 Static Semantics for Declaration
dec IS_LOG_BINOP : [Binop -> Binop -> Bool]; dec IS_NUM_BINOP : [Binop -> Binop -> Bool]; dec TDECL : [Declaration -> TEnv -> TEnv]; dec TEXPR : [TExpression -> TEnv -> Type_Info];
dec TEXPR_LIST : [TExpr_list -> TEnv -> TypeInfoList]; dec TSTAT : [Statement -> TEnv -> Type_Info]; def TDECL = \(d : Declaration, r : TEnv) @(d, TEnv, r, \(i : Identifier, t : Type_Elem) @(t, TEnv, @(ENTER_TENV, r, i, BOO_TYPED), @(ENTER_TENV, r, i, INT_TYPED)), \(x1 : TEnv, x2 : TEnv) let d1 = @(LEFT_DECL, d), d2 = @(RIGHT_DECL, d) in let r1 = @(TDECL, d1, r) in @(TDECL, d2, r1));
10.3.4 Static Semantics for Expression
def TEXPR = \(e : TExpression, r : TEnv) @(e, Type_Info, ERR_TYPED, \(b : Bool) BOO_TYPED, \(i : Int) INT_TYPED, \(i : Identifier) @(r, i), \(o : Binop, x1 : Type_Info, x2 : Type_Info) let e1 = @(GET_EXPR_LEFT, e), e2 = @(GET_EXPR_RIGHT, e) in let te1 = @(TEXPR, e1, r), te2 = @(TEXPR, e2, r) in @(GIF_THEN_ELSE, Type_Info, @(TYPE_INFO_EQ, te1, te2), @(te1, Type_Info, ERR_TYPED, ERR_TYPED, @(o, Type_Info, ERR_TYPED,
ERR_TYPED, ERR_TYPED, ERR_TYPED, ERR_TYPED, BOO_TYPED, BOO_TYPED, ERR_TYPED, ERR_TYPED, ERR_TYPED, ERR_TYPED, BOO_TYPED, BOO_TYPED), @(o, Type_Info, INT_TYPED, INT_TYPED, INT_TYPED, INT_TYPED, INT_TYPED, BOO_TYPED, BOO_TYPED, BOO_TYPED, BOO_TYPED, BOO_TYPED, BOO_TYPED, ERR_TYPED, ERR_TYPED), \(x : TypeInfoList) ERR_TYPED), ERR_TYPED), \(o : Unop, x : Type_Info) let ne = @(GET_EXPR_UN, e) in let te = @(TEXPR, ne, r) in @(te, Type_Info, ERR_TYPED, ERR_TYPED, @(o, Type_Info, BOO_TYPED, ERR_TYPED), @(o, Type_Info, ERR_TYPED, INT_TYPED), \(x : TypeInfoList) ERR_TYPED), \(l : TExpr_list) let tvl = @(TEXPR_LIST, l, r) in
@(LST_TYPED, tvl), \(pos : Type_Info, n : Nat) let pe = @(GET_EXPR_POS, e) in let tv = @(TEXPR, pe, r) in @(GIF_THEN_ELSE, Type_Info, @(IS_LST_TYPED, tv), let tvl = @(GET_LST_TYPED, tv) in let ltvl = @(LENGTH, Type_Info, tvl) in @(GIF_THEN_ELSE, Type_Info, @(OR, @(EQUAL, n, ltvl), @(LESS, n, ltvl)), @(Nth, Type_Info, n, tvl, ERR_TYPED), ERR_TYPED), ERR_TYPED), \(x1 : Type_Info, x2 : Type_Info) let e1 = @(GET_EXPR_IFT, e), e2 = @(GET_EXPR_THT, e) in let te1 = @(TEXPR, e1, r), te2 = @(TEXPR, e2, r) in @(GIF_THEN_ELSE, Type_Info, @(TYPE_INFO_EQ, te1, BOO_TYPED), te2, ERR_TYPED), \(x1 : Type_Info, x2 : Type_Info, x3 : Type_Info) let e1 = @(GET_EXPR_IF, e), e2 = @(GET_EXPR_TH, e), e3 = @(GET_EXPR_EL, e) in let te1 = @(TEXPR, e1, r), te2 = @(TEXPR, e2, r), te3 = @(TEXPR, e3, r) in @(GIF_THEN_ELSE, Type_Info, @(TYPE_INFO_EQ, te1, BOO_TYPED), @(GIF_THEN_ELSE, Type_Info, @(TYPE_INFO_EQ, te2, te3), te2, ERR_TYPED), ERR_TYPED));
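The binop case of TEXPR first requires the operand types to agree and then dispatches on the operator: arithmetic operators on int operands yield int, comparisons on int yield bool, and the logical connectives on bool yield bool; anything else is a type error. A condensed Python sketch of that rule (the operator tables are illustrative and not ToyC's exact operator set):

```python
ARITH  = {"+", "-", "*", "/"}
COMPAR = {"<", "<=", ">", ">=", "==", "!="}
LOGIC  = {"and", "or"}

def type_binop(op, t1, t2):
    # Operands must share a type; the result depends on the operator class.
    if t1 != t2:
        return "err"
    if t1 == "int" and op in ARITH:
        return "int"
    if t1 == "int" and op in COMPAR:
        return "bool"
    if t1 == "bool" and op in LOGIC:
        return "bool"
    return "err"

assert type_binop("+", "int", "int") == "int"
assert type_binop("<", "int", "int") == "bool"
assert type_binop("and", "bool", "bool") == "bool"
assert type_binop("+", "int", "bool") == "err"
```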
10.3.5 Static Semantics for Statement
def TSTAT = \(s : Statement, r : TEnv)
@(s, Type_Info, WEL_TYPED, \(i : Identifier, e : TExpression) let it = @(r, i), et = @(TEXPR, e, r) in @(GIF_THEN_ELSE, Type_Info, @(TYPE_INFO_EQ, it, et), @(it, Type_Info, ERR_TYPED, ERR_TYPED, WEL_TYPED, WEL_TYPED, \(x : TypeInfoList) ERR_TYPED), ERR_TYPED), \(e : TExpression, x : Type_Info) let b = @(GET_IFTHEN_STAT, s) in let et = @(TEXPR, e, r) in @(GIF_THEN_ELSE, Type_Info, @(TYPE_INFO_EQ, et, BOO_TYPED), @(TSTAT, b, r), ERR_TYPED), \(e : TExpression, x1 : Type_Info, x2 : Type_Info) let b1 = @(GET_IFTHEL_THEN, s), b2 = @(GET_IFTHEL_ELSE, s) in let et = @(TEXPR, e, r) in @(GIF_THEN_ELSE, Type_Info, @(TYPE_INFO_EQ, et, BOO_TYPED), let bt1 = @(TSTAT, b1, r), bt2 = @(TSTAT, b2, r) in @(GIF_THEN_ELSE, Type_Info, @(AND, @(TYPE_INFO_EQ, bt1, WEL_TYPED), @(TYPE_INFO_EQ, bt2, WEL_TYPED)), WEL_TYPED, ERR_TYPED), ERR_TYPED), \(e : TExpression, x : Type_Info) let b = @(GET_WHILE_STAT, s) in let et = @(TEXPR, e, r) in @(GIF_THEN_ELSE, Type_Info,
@(TYPE_INFO_EQ, et, BOO_TYPED), @(TSTAT, b, r), ERR_TYPED), \(x1 : Type_Info, x2 : Type_Info) let b1 = @(GET_SEQ_HEAD, s), b2 = @(GET_SEQ_TAIL, s) in let bt1 = @(TSTAT, b1, r) in @(GIF_THEN_ELSE, Type_Info, @(TYPE_INFO_EQ, bt1, WEL_TYPED), @(TSTAT, b2, r), ERR_TYPED), \(e : TExpression) let et = @(TEXPR, e, r) in @(GIF_THEN_ELSE, Type_Info, @(OR, @(TYPE_INFO_EQ, et, BOO_TYPED), @(TYPE_INFO_EQ, et, INT_TYPED)), WEL_TYPED, ERR_TYPED), \(i : Identifier) let it = @(r, i) in @(GIF_THEN_ELSE, Type_Info, @(OR, @(TYPE_INFO_EQ, it, BOO_TYPED), @(TYPE_INFO_EQ, it, INT_TYPED)), WEL_TYPED, ERR_TYPED), \(l : Identifier, x : Type_Info) let b = @(GET_LABEL_STAT, s) in @(TSTAT, b, r), \(l : Identifier) @(r, l));
10.3.6 Static Semantics for Jump
dec TJUMP : [Statement -> TEnv -> TEnv]; def TJUMP = \(s : Statement, r : TEnv) @(s, TEnv, %-- Null Statement --%
EMPTY_TENV, %-- Assignment --% \(i : Identifier, e : TExpression) EMPTY_TENV, %-- If-Then --% \(e : TExpression, x : TEnv) let b = @(GET_IFTHEN_STAT, s) in @(TJUMP, b, r), %-- If-Then-Else --% \(e : TExpression, x1 : TEnv, x2 : TEnv) let b1 = @(GET_IFTHEL_THEN, s), b2 = @(GET_IFTHEL_ELSE, s) in @(COMP_TENV, @(TJUMP, b1, r), @(TJUMP, b2, r)), %-- While-Do --% \(e : TExpression, x : TEnv) let b = @(GET_WHILE_STAT, s) in @(TJUMP, b, r), %-- Sequence --% \(x1 : TEnv, x2 : TEnv) let b1 = @(GET_SEQ_HEAD, s), b2 = @(GET_SEQ_TAIL, s) in @(COMP_TENV, @(TJUMP, b1, r), @(TJUMP, b2, r)), %-- Output --% \(e : TExpression) EMPTY_TENV, %-- Input --% \(i : Identifier) EMPTY_TENV, %-- Label --% \(l : Identifier, x : TEnv) let b = @(GET_LABEL_STAT, s) in @(COMP_TENV, @(TJUMP, b, r), @(ENTER_TENV, EMPTY_TENV, l, WEL_TYPED)), %-- Goto --% \(l : Identifier) EMPTY_TENV);
10.3.7 Static Semantics for Program
def TPROG = \(p : Program) @(p, Type_Info, \(i : Identifier, d : Declaration, s : Statement) let r1 = @(TDECL, d, EMPTY_TENV), r2 = @(TJUMP, s, r1) in @(TSTAT, s, r2));
10.4 Utilities for Dynamic Semantics
10.4.1 Environment
dec Identifier : Prop; def DVal = @(Sum2, Location, Cont); dec ERR_DVAL : DVal; dec EQ_DVAL : [DVal -> DVal -> Bool];
10.4.1.1 Type Definition of Env
def Env = [Identifier -> DVal]; dec ENV_EQ : [Env -> Env -> Bool]; dec IDEN_EQ : [Identifier -> Identifier -> Bool]; dec LOC_EQ : [Location -> Location -> Bool];
10.4.1.2 Constructors of Env
dec EMPTY_ENV : Env; def ENTER_ENV = \(r : Env, i : Identifier, t : DVal) \(j : Identifier) @(GIF_THEN_ELSE, DVal, @(ENV_EQ, r, EMPTY_ENV), ERR_DVAL, @(GIF_THEN_ELSE, DVal, @(IDEN_EQ, i, j), t, @(r, j))); def ENTER_LOC_ENV = \(r : Env, i : Identifier, t : Location) \(j : Identifier) @(GIF_THEN_ELSE,
DVal, @(ENV_EQ, r, EMPTY_ENV), @(INJ1, Location, Cont, ERR_LOC), @(GIF_THEN_ELSE, DVal, @(IDEN_EQ, i, j), @(INJ1, Location, Cont, t), @(r, j))); def ENTER_CON_ENV = \(r : Env, i : Identifier, t : Cont) \(j : Identifier) @(GIF_THEN_ELSE, DVal, @(ENV_EQ, r, EMPTY_ENV), @(INJ2, Location, Cont, ERR_CONT), @(GIF_THEN_ELSE, DVal, @(IDEN_EQ, i, j), @(INJ2, Location, Cont, t), @(r, j))); def COMP_ENV = \(r1 : Env, r2 : Env) \(i : Identifier) let l = @(r2, i) in @(GIF_THEN_ELSE, DVal, @(EQ_DVAL, l, ERR_DVAL), @(r1, i), @(r2, i));
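An Env is just a function from identifiers to denotable values: ENTER_ENV returns a new function that shadows one binding, and COMP_ENV combines two environments, falling back to the first when the second has no binding for the identifier. A minimal Python sketch of the same idea (ERR_DVAL is an illustrative sentinel, and the special case for the empty environment is omitted):

```python
ERR_DVAL = "err"                      # illustrative error sentinel

EMPTY_ENV = lambda i: ERR_DVAL

def ENTER_ENV(r, i, t):
    # Extend r with the binding i -> t, shadowing any earlier one.
    return lambda j: t if i == j else r(j)

def COMP_ENV(r1, r2):
    # Prefer r2's binding unless r2 maps j to the error value.
    return lambda j: r1(j) if r2(j) == ERR_DVAL else r2(j)

r = ENTER_ENV(ENTER_ENV(EMPTY_ENV, "x", 1), "y", 2)
assert r("x") == 1 and r("y") == 2 and r("z") == ERR_DVAL
assert COMP_ENV(r, ENTER_ENV(EMPTY_ENV, "x", 9))("x") == 9
assert COMP_ENV(r, EMPTY_ENV)("y") == 2
```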
10.4.1.3 Predicate Functions of Env
def IS_IN_ENV = \(r : Env, i : Identifier) let t = @(r, i) in @(GIF_THEN_ELSE, Bool, @(EQ_DVAL, t, ERR_DVAL), FF, TT);
10.4.2 State
10.4.2.1 Type Definition of Value
dec Value_list : Prop; def Value = !(T : Prop) [[Bool -> T] -> [Int -> T] -> [Value_list -> T] -> T]; def Value_list = @(List, Value); dec ERR_VLST : Value_list;
10.4.2.2 Constructors of Value
def BVAL = \(b : Bool) \(T : Prop) \(x : [Bool -> T], y : [Int -> T], z : [Value_list -> T]) @(x, b); def IVAL = \(i : Int) \(T : Prop) \(x : [Bool -> T], y : [Int -> T], z : [Value_list -> T]) @(y, i); def LVAL = \(l : Value_list) \(T : Prop) \(x : [Bool -> T], y : [Int -> T], z : [Value_list -> T]) @(z, l); dec ERR_VAL : Value; dec IS_ERR_VAL : [Value -> Bool]; dec EQ_VAL : [Value -> Value -> Bool];
10.4.2.3 Predicate Functions of Value
def IS_BVAL = \(v : Value)
@(v, Bool, \(x : Bool) TT, \(y : Int) FF, \(z : Value_list) FF); def IS_IVAL = \(v : Value) @(v, Bool, \(x : Bool) FF, \(y : Int) TT, \(z : Value_list) FF); def IS_LVAL = \(v : Value) @(v, Bool, \(x : Bool) FF, \(y : Int) FF, \(z : Value_list) TT);
10.4.2.4 Projectors of Value
def GET_BVAL = \(v : Value) @(v, Bool, \(b : Bool) b, \(i : Int) ERR_BOOL, \(l : Value_list) ERR_BOOL); def GET_IVAL = \(v : Value) @(v, Int, \(b : Bool) ERR_INT, \(i : Int) i, \(l : Value_list) ERR_INT); def GET_LVAL = \(v : Value) @(v, Value_list, \(b : Bool) ERR_VLST, \(i : Int) ERR_VLST, \(l : Value_list) l);
10.4.2.5 Type Definition of Memory
def Memory = !(T : Prop) [T -> [Location -> Value -> T -> T] -> T];
10.4.2.6 Constructors of Memory
def EMPTY_MEM = \(T : Prop) \(x : T, y : [Location -> Value -> T -> T]) x; def ENTER_MEM = \(m : Memory, l : Location, v : Value) \(T : Prop) \(x : T, y : [Location -> Value -> T -> T]) @(y, l, v, @(m, T, x, y)); dec ERR_MEM : Memory;
10.4.2.7 Projectors of Memory
def TL = \(x : Memory, y : Memory) x; def FL = \(x : Memory, y : Memory) y; def GET_VAL = \(s : Memory) @(s, Value, ERR_VAL, \(u : Location, v : Value, n : Value) v); def GET_LOC = \(s : Memory) @(s, Location, ERR_LOC, \(u : Location, v : Value, n : Location) u); def GET_MEM = \(n : Memory) @(n, [[Memory -> Memory -> Memory] -> Memory], \(u : [Memory -> Memory -> Memory]) ERR_MEM, \(e : Location, d : Value, p : [[Memory -> Memory -> Memory] -> Memory], v : [Memory -> Memory -> Memory]) @(v, @(ENTER_MEM, @(p, TL), e, d), @(p, TL)), FL); dec FETCH : [Memory -> Location -> Value]; def FETCH = \(m : Memory, l : Location) let i = @(GET_LOC, m), v = @(GET_VAL, m), n = @(GET_MEM, m) in
@(GIF_THEN_ELSE, Value, @(LOC_EQ, i, l), v, @(FETCH, n, l));
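Memory is a Church-encoded list of (location, value) cells, and FETCH walks it until the location matches. Operationally this is linear lookup in an association list where ENTER_MEM prepends, so the newest binding for a location wins. A Python sketch (the ERR_VAL sentinel is illustrative):

```python
ERR_VAL = "err"

def enter_mem(memory, loc, val):
    # ENTER_MEM prepends a cell, so newer bindings shadow older ones.
    return [(loc, val)] + memory

def fetch(memory, loc):
    # memory: list of (location, value) cells, most recent first.
    for l, v in memory:
        if l == loc:
            return v
    return ERR_VAL

m = enter_mem(enter_mem([], 0, 7), 0, 8)
assert fetch(m, 0) == 8          # newest cell wins
assert fetch(m, 1) == ERR_VAL
```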
10.4.2.8 Type Definition of Input
def Input = @(List, Value);
10.4.2.9 Constructors of Input
def EMPTY_INPUT = @(NIL, Value); def ENTER_INPUT = @(CONS, Value);
10.4.2.10 Projectors of Input
def HEAD_INPUT = @(HEAD, Value); def REST_INPUT = @(TAIL, Value);
10.4.2.11 Type Definition of Output
def Output = @(List, Value);
10.4.2.12 Constructors of Output
def EMPTY_OUTPUT = @(NIL, Value); def ENTER_OUTPUT = @(CONS, Value);
10.4.2.13 Type Definition of State
def State = @(Product3, Memory, Output, Input);
10.4.2.14 Constructors of State
def STATE = @(PRODUCT3, Memory, Output, Input); dec ERR_STATE : State;
10.4.2.15 Projectors of State
def GET_MEMORY = @(PROJ1, Memory, Output, Input); def GET_OUTPUT = @(PROJ2, Memory, Output, Input); def GET_INPUT = @(PROJ3, Memory, Output, Input); %-- Changing State --% def DONOTHING = \(s : State) s; def CHANGE = \(s : State, l : Location, v : Value) let m = @(GET_MEMORY, s), o = @(GET_OUTPUT, s), i = @(GET_INPUT, s) in @(STATE, @(ENTER_MEM, m, l, v), o, i); def PRINT = \(s : State, v : Value) let m = @(GET_MEMORY, s), o = @(GET_OUTPUT, s), i = @(GET_INPUT, s) in @(STATE, m, @(ENTER_OUTPUT, v, o), i); def READ = \(s : State, l : Location) let m = @(GET_MEMORY, s), o = @(GET_OUTPUT, s), i = @(GET_INPUT, s) in let ni = @(REST_INPUT, i), nm = @(ENTER_MEM, m, l, @(HEAD_INPUT, i)) in @(STATE, nm, o, ni); def GET = \(s : State, l : Location) let m = @(GET_MEMORY, s) in
@(FETCH, m, l);
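A State bundles memory, output, and input: CHANGE writes a location in memory, PRINT conses a value onto the output stream, and READ moves the head of the input stream into memory. The same operations, sketched with Python tuples standing in for Product3 (the concrete representations are illustrative):

```python
def change(state, loc, val):
    # CHANGE: bind loc to val in the memory component.
    mem, out, inp = state
    return ([(loc, val)] + mem, out, inp)

def print_(state, val):
    # PRINT: ENTER_OUTPUT conses the value onto the output stream.
    mem, out, inp = state
    return (mem, [val] + out, inp)

def read(state, loc):
    # READ: consume the head of the input into location loc.
    mem, out, inp = state
    return ([(loc, inp[0])] + mem, out, inp[1:])

s = ([], [], [42])
s = read(s, 0)                    # input head 42 lands at location 0
s = print_(s, 7)
assert s == ([(0, 42)], [7], [])
```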
10.4.2.16 Continuation
def Cont = [State -> State]; def Dont = [Env -> Cont]; def Eont = [Value -> Cont]; dec ERR_CONT : Cont;
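Cont = [State -> State] is a statement continuation, Eont = [Value -> Cont] an expression continuation, and Dont = [Env -> Cont] a declaration continuation. The discipline can be sketched in Python: an expression takes an expression continuation k, and once k is supplied it yields a state transformer (the dict-based state is illustrative):

```python
# An expression in CPS receives an expression continuation k and
# returns a state transformer (Cont = State -> State).
def const(n):
    return lambda k: k(n)             # hand the value straight to k

def add(e1, e2):
    # Evaluate e1, then e2, then pass the sum to k.
    return lambda k: e1(lambda v1: e2(lambda v2: k(v1 + v2)))

# Final expression continuation: record the value in the state.
def halt(v):
    return lambda state: {**state, "result": v}

prog = add(const(2), add(const(3), const(4)))
assert prog(halt)({}) == {"result": 9}
```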
10.5 Dynamic Semantics of ToyC
10.5.1 Operations for Binop and Unop
dec OPERATOR : [Binop -> Value -> Value -> Value]; dec UNOPERATOR : [Unop -> Value -> Value];
10.5.2 Semantics of Declaration
dec DECL : [Declaration -> Env -> Dont -> Cont]; def DECL = \(d : Declaration, r : Env, c : Dont) @(d, Cont, @(c, r), \(i : Identifier, t : Type_Elem) @(GIF_THEN_ELSE, Cont, @(IS_IN_ENV, r, i), @(c, r), let l = NEW_LOC in @(t, Cont, @(c, @(ENTER_LOC_ENV, r, i, l)), @(c, @(ENTER_LOC_ENV, r, i, l)))), \(x1 : Cont, x2 : Cont) let d1 = @(LEFT_DECL, d),
d2 = @(RIGHT_DECL, d) in @(DECL, d1, r, \(r1 : Env) @(DECL, d2, @(COMP_ENV, r, r1), \(r2 : Env) @(c, @(COMP_ENV, r1, r2)))));
10.5.3 Semantics for TExpression
dec EXPR : [TExpression -> Env -> Eont -> Cont]; dec EXPR_LIST : [TExpr_list -> Env -> Eont -> Cont]; def EXPR_LIST = \(el : TExpr_list, r : Env, k : Eont) @(GIF_THEN_ELSE, Cont, @(IS_EMPTY, TExpression, el), @(k, @(LVAL, @(NIL, Value))), let hel = @(HEAD, TExpression, el), tel = @(TAIL, TExpression, el) in @(EXPR, hel, r, \(v1 : Value) @(EXPR_LIST, tel, r, \(v2 : Value) let vl = @(GET_LVAL, v2) in @(k, @(LVAL, @(CONS, Value, v1, vl)))))); def EXPR = \(e : TExpression, r : Env, k : Eont) @(e, Cont, @(k, ERR_VAL), \(b : Bool) @(k, @(BVAL, b)), \(i : Int) @(k, @(IVAL, i)), \(i : Identifier, s : State) let cc = @(r, i) in @(WHEN, Location, Cont, State, cc,
\(a1 : Location) @(k, @(GET, s, a1), s), \(a2 : Cont) @(k, ERR_VAL, s)), \(o : Binop, x1 : Cont, x2 : Cont) let e1 = @(GET_EXPR_LEFT, e), e2 = @(GET_EXPR_RIGHT, e) in @(EXPR, e1, r, \(v1 : Value) @(EXPR, e2, r, \(v2 : Value) @(k, @(OPERATOR, o, v1, v2)))), \(o : Unop, x : Cont) let ne = @(GET_EXPR_UN, e) in @(EXPR, ne, r, \(v : Value) @(k, @(UNOPERATOR, o, v))), \(l : TExpr_list) @(EXPR_LIST, l, r, k), \(x : Cont, n : Nat) let pe = @(GET_EXPR_POS, e) in @(EXPR, pe, r, \(v : Value) @(GIF_THEN_ELSE, Cont, @(IS_LVAL, v), let vl = @(GET_LVAL, v) in @(k, @(Nth, Value, n, vl, ERR_VAL)), @(k, ERR_VAL))), \(x1 : Cont, x2 : Cont) let e1 = @(GET_EXPR_IFT, e), e2 = @(GET_EXPR_THT, e) in @(EXPR, e1, r, \(v : Value) @(GIF_THEN_ELSE, Cont, @(GET_BVAL, v), @(EXPR, e2, r, k), @(k, ERR_VAL))), \(x1 : Cont, x2 : Cont, x3 : Cont) let e1 = @(GET_EXPR_IF, e), e2 = @(GET_EXPR_TH, e), e3 = @(GET_EXPR_EL, e) in @(EXPR, e1, r, \(v : Value) @(GIF_THEN_ELSE, Cont, @(GET_BVAL, v), @(EXPR, e2, r, k),
@(EXPR, e3, r, k))));
10.5.4 Semantics for Statement
dec STAT : [Statement -> Env -> Cont -> Cont]; def STAT = \(s : Statement, r : Env, c : Cont) @(s, Cont, c, \(i : Identifier, e : TExpression) let cc = @(r, i) in @(WHEN, Location, Cont, Cont, cc, \(a1 : Location) @(EXPR, e, r, \(v : Value, z : State) @(c, @(CHANGE, z, a1, v))), \(a2 : Cont) ERR_CONT), \(e : TExpression, x : Cont) let b = @(GET_IFTHEN_STAT, s) in @(EXPR, e, r, \(v : Value) @(GIF_THEN_ELSE, Cont, @(GET_BVAL, v), @(STAT, b, r, c), ERR_CONT)), \(e : TExpression, x1 : Cont, x2 : Cont) let b1 = @(GET_IFTHEL_THEN, s), b2 = @(GET_IFTHEL_ELSE, s) in @(EXPR, e, r, \(v : Value) @(GIF_THEN_ELSE, Cont, @(GET_BVAL, v),
@(STAT, b1, r, c), @(STAT, b2, r, c))), \(e : TExpression, x : Cont) let b = @(GET_WHILE_STAT, s) in @(EXPR, e, r, \(v : Value) @(GIF_THEN_ELSE, Cont, @(GET_BVAL, v), @(STAT, @(MK_SEQ, b, s), r, c), c)), \(x1 : Cont, x2 : Cont) let b1 = @(GET_SEQ_HEAD, s), b2 = @(GET_SEQ_TAIL, s) in @(STAT, b1, r, @(STAT, b2, r, c)), \(e : TExpression) @(EXPR, e, r, \(v : Value, z : State) @(PRINT, z, v)), \(i : Identifier) \(z : State) let cc = @(r, i) in @(WHEN, Location, Cont, State, cc, \(a1 : Location) @(c, @(READ, z, a1)), \(a2 : Cont) ERR_STATE), %-- Label Statement --% \(l : Identifier, x : Cont) let b = @(GET_LABEL_STAT, s) in @(STAT, b, r, c), %-- Goto Statement --% \(l : Identifier) let cc = @(r, l) in @(WHEN, Location, Cont, Cont, cc, \(a1 : Location) ERR_CONT, \(a2 : Cont) a2));
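The While case of STAT is worth spelling out: when the condition evaluates to true, the statement runs @(MK_SEQ, b, s), that is, the body followed by the whole loop again; when it is false, control falls through to the continuation c. The same unfolding in a simplified Python CPS interpreter (states as dicts, conditions and bodies as plain functions, all illustrative):

```python
def stat_while(cond, body, c):
    # While in CPS: on a true condition run the body and then the
    # loop again (body ; while ...); on false, pass the state to c.
    def run(state):
        if cond(state):
            return stat_while(cond, body, c)(body(state))
        return c(state)
    return run

# Count x down to zero, with the identity as final continuation.
loop = stat_while(lambda s: s["x"] > 0,
                  lambda s: {"x": s["x"] - 1},
                  lambda s: s)
assert loop({"x": 3}) == {"x": 0}
```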
10.5.5 Semantics for Jump
dec JUMP : [Statement -> Env -> Cont -> Env]; def JUMP = \(s : Statement, r : Env, c : Cont) @(s, Env, %-- Null Statement --% EMPTY_ENV, %-- Assignment --% \(i : Identifier, e : TExpression) EMPTY_ENV, %-- If-Then --% \(e : TExpression, x : Env) let b = @(GET_IFTHEN_STAT, s) in @(JUMP, b, r, c), %-- If-Then-Else --% \(e : TExpression, x1 : Env, x2 : Env) let b1 = @(GET_IFTHEL_THEN, s), b2 = @(GET_IFTHEL_ELSE, s) in @(COMP_ENV, @(JUMP, b1, r, c), @(JUMP, b2, r, c)), %-- While-Do --% \(e : TExpression, x : Env) let b = @(GET_WHILE_STAT, s) in @(JUMP, b, r, @(STAT, s, r, c)), %-- Sequence --% \(x1 : Env, x2 : Env) let b1 = @(GET_SEQ_HEAD, s), b2 = @(GET_SEQ_TAIL, s) in @(COMP_ENV, @(JUMP, b1, r, @(STAT, b2, r, c)), @(JUMP, b2, r, c)), %-- Output --% \(e : TExpression) EMPTY_ENV, %-- Input --% \(i : Identifier) EMPTY_ENV, %-- Label --% \(l : Identifier, x : Env) let b = @(GET_LABEL_STAT, s) in @(COMP_ENV, @(JUMP, b, r, c), @(ENTER_CON_ENV, EMPTY_ENV, l, @(STAT, b, r, c))), %-- Goto --% \(l : Identifier) EMPTY_ENV);
10.5.6 Semantics for Toy Program
def TOY_SEMAN = \(p : Program) let t = @(TPROG, p) in @(GIF_THEN_ELSE, Cont, @(TYPE_INFO_EQ, t, WEL_TYPED), @(p, Cont, \(i : Identifier, d : Declaration, s : Statement) @(DECL, d, EMPTY_ENV, \(r1 : Env) lec r2 : Env in let r2 = @(JUMP, s, @(COMP_ENV, r1, r2), \(z : State) z) in @(STAT, s, r2, \(z : State) z))), \(z : State) z)
Chapter 11
The SCADE/LUSTRE Compiler - Static Analysis

We now describe the main techniques used in the LUSTRE compiler, taken from the work of Nicolas Halbwachs, Paul Caspi, Pascal Raymond and Daniel Pilaud [16].
11.1 Static Verification
Static well-formedness checking is clearly an important issue within the framework of reliable programming, and aims at avoiding the overhead of dynamic checks at run time.
• Definition checking. Every local and output variable should have one and only one equational definition.
• Absence of recursive node calls. In view of obtaining automata-like executable programs, LUSTRE allows up to now only static networks to be described. The problem of structuring recursive calls so that the above property is maintained has not yet been investigated.
• Clock consistency checking - the clock calculus.
• Absence of uninitialized expressions (yielding nil values). Such expressions are accepted as long as they do not concern clocks, outputs, or assertions.
• Absence of cyclic definitions. Any cycle in the network should contain at least one pre operator. An equation such as X = 3 * X + 1 has a meaning, namely the least solution with respect to the prefix ordering of sequences; in this case, the solution for X is the empty sequence, and
it can be interpreted as a deadlock. It is therefore rejected. Note that LUSTRE also rejects structural deadlocks which are not true ones, such as X = if C then Y else Z; Y = if C then Z else X;
The reason is that the analysis of such networks is undecidable, in general.
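The last rejection criterion can be implemented as a cycle search in the dependency graph in which edges that pass through a pre operator are cut; any remaining cycle signals a (potential) deadlock. A sketch under that reading, with the graph encoded (illustratively) as variable -> list of (dependency, through_pre) pairs:

```python
def has_instant_cycle(deps):
    # deps: var -> list of (var, through_pre) pairs; dependencies
    # through pre break the cycle, so those edges are skipped.
    def visit(v, stack, seen):
        if v in stack:
            return True               # back to a node on the path
        if v in seen:
            return False              # already fully explored
        seen.add(v)
        for w, through_pre in deps.get(v, []):
            if not through_pre and visit(w, stack | {v}, seen):
                return True
        return False
    return any(visit(v, frozenset(), set()) for v in deps)

# X = 3 * X + 1 : instantaneous self-dependency -> rejected.
assert has_instant_cycle({"X": [("X", False)]})
# X = 3 * pre(X) + 1 : the cycle goes through pre -> accepted.
assert not has_instant_cycle({"X": [("X", True)]})
```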
11.2 Definition Checking
This checks that every local and output variable has one and only one equational definition.
11.2.1 Checking of Variable Definition in Equations
11.2.1.1 Checking of Variable Defined in Identifier List
The function ChkVarIdLst is used to check whether an identifier occurs in a list of identifiers or not. dec ChkVarIdLst : [Identifier -> Ident_list -> Bool]; def ChkVarIdLst = \(i : Identifier, il : Ident_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Identifier, il), FF, let hil = @(HEAD, Identifier, il), til = @(TAIL, Identifier, il) in @(GIF_THEN_ELSE, Bool, @(EQ_IDENT, i, hil), @(NOT, @(ChkVarIdLst, i, til)), @(ChkVarIdLst, i, til)));
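Note the NOT in the recursive case: as soon as the identifier is found at the head, the recursive result on the tail is negated, so a second occurrence flips the answer back to false. (Taken literally the function returns true for any odd number of occurrences; for the intended one-and-only-one-definition check an explicit exactly-once test is the clearer reading, as in this Python sketch:)

```python
def occurs_exactly_once(i, il):
    # Intended reading of ChkVarIdLst: i occurs exactly once in il.
    return il.count(i) == 1

assert occurs_exactly_once("x", ["y", "x", "z"])
assert not occurs_exactly_once("x", ["x", "y", "x"])
assert not occurs_exactly_once("x", ["y", "z"])
```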
11.2.1.2 Checking of Variable Defined in Equation
The function ChkVarEq is used to check whether an identifier occurs in an equation or not. dec ChkVarEq : [Identifier -> Equation -> Bool];
def ChkVarEq = \(i : Identifier, eq : Equation) let idl = @(GET_EQ_IDLIST, eq) in @(ChkVarIdLst, i, idl);
11.2.1.3 Checking of Variable Defined in Equation List
The function ChkVarEqLst is used to check whether an identifier occurs in a list of equations or not. dec ChkVarEqLst : [Identifier -> Eq_list -> Bool]; def ChkVarEqLst = \(i : Identifier, eql : Eq_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Equation, eql), FF, let heql = @(HEAD, Equation, eql), teql = @(TAIL, Equation, eql) in let defok = @(ChkVarEq, i, heql) in @(GIF_THEN_ELSE, Bool, defok, @(NOT, @(ChkVarEqLst, i, teql)), @(ChkVarEqLst, i, teql)));
11.2.2 Checking of Local Variable Declaration
11.2.2.1 Checking of Local Declaration
The function ChkLDefEqLst is used to check whether a local variable has one and only one equational definition. dec ChkLDefEqLst : [Local -> Eq_list -> Bool]; def ChkLDefEqLst = \(l : Local, eqdl : Eq_list) @(l, Bool, \(i : Identifier, t : Types) @(ChkVarEqLst, i, eqdl), \(i : Identifier, t : Types, w : Identifier) @(ChkVarEqLst, i, eqdl));
11.2.2.2 Checking of Local Declaration List
The function ChkLLstEqLst is used to check whether all local variables have one and only one equational definition. dec ChkLLstEqLst : [Local_list -> Eq_list -> Bool]; def ChkLLstEqLst = \(ll : Local_list, eqdl : Eq_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Local, ll), TT, let hll = @(HEAD, Local, ll), tll = @(TAIL, Local, ll) in let hllr = @(ChkLDefEqLst, hll, eqdl), tllr = @(ChkLLstEqLst, tll, eqdl) in @(AND, hllr, tllr));
11.2.3 Checking of Output Declaration
11.2.3.1 Checking of Output Declaration
The function ChkODefEqLst is used to check whether an output variable has one and only one equational definition. dec ChkODefEqLst : [Output -> Eq_list -> Bool]; def ChkODefEqLst = \(o : Output, eqdl : Eq_list) @(o, Bool, \(i : Identifier, t : Types) @(ChkVarEqLst, i, eqdl), \(i : Identifier, t : Types, w : Identifier) @(ChkVarEqLst, i, eqdl));
11.2.3.2 Checking of Output Declaration List
The function ChkOLstEqLst is used to check whether all output variables have one and only one equational definition. dec ChkOLstEqLst : [Output_list -> Eq_list -> Bool]; def ChkOLstEqLst = \(olst : Output_list, eqdl : Eq_list)
@(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Output, olst), TT, let hol = @(HEAD, Output, olst), tol = @(TAIL, Output, olst) in let holr = @(ChkODefEqLst, hol, eqdl), tolr = @(ChkOLstEqLst, tol, eqdl) in @(AND, holr, tolr));
11.2.4 Checking of Node Definition
This is to check the absence of recursive node calls. In view of obtaining automata-like executable programs, LUSTRE allows up to now only static networks to be described.
11.2.4.1 Checking of Node Definition
dec NodDefChk : [Node_Def -> Bool]; def NodDefChk = \(d : Node_Def) let head = @(GET_NODE_HEAD, d), llst = @(GET_NODE_LLST, d), eqls = @(GET_NODE_EQLS, d) in let olst = @(GET_HEAD_OULS, head) in @(AND, @(ChkLLstEqLst, llst, eqls), @(ChkOLstEqLst, olst, eqls));
11.2.4.2 Checking of Node Definition List
Each node will be checked one by one. dec NodDefLstChk : [Node_Def_List -> Bool]; def NodDefLstChk = \(l : @(List, Node_Def)) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Node_Def, l), TT, let hl = @(HEAD, Node_Def, l), tl = @(TAIL, Node_Def, l) in
let hlr = @(NodDefChk, hl), tlr = @(NodDefLstChk, tl) in @(AND, hlr, tlr));
11.3 Initialization Checking
This checks the absence of uninitialized expressions (yielding nil values). Such expressions are accepted as long as they do not concern clocks, outputs, or assertions.
11.3.1 Initialization Checking of Value Stream
dec SCADE_VAL_CHK : [@(TStream, SDVal) -> Bool]; def SCADE_VAL_CHK = \(ts : @(TStream, SDVal)) let hts = @(FST, ts), tts = @(SND, ts) in let hv = @(PJ1, SDVal, Bool, hts), tv = @(PJ2, SDVal, Bool, hts) in @(GIF_THEN_ELSE, Bool, tv, @(IS_ERR_SDVAL, hv), FF);
11.3.2 Initialization Checking of Value Stream List
dec SCADE_VAL_LST_CHK : [SDVal_list -> Bool]; def SCADE_VAL_LST_CHK = \(sdvalst : SDVal_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, @(TStream, SDVal), sdvalst), TT, let hsdvalst = @(HEAD, @(TStream, SDVal), sdvalst), tsdvalst = @(TAIL, @(TStream, SDVal), sdvalst) in @(AND, @(SCADE_VAL_CHK, hsdvalst), @(SCADE_VAL_LST_CHK, tsdvalst)));
11.3.3 Initialization Checking of Expression
def SDEXPR_INIT_CHK = \(e : Expression, r : Env) let sdvalst = @(SDEXPR, e, r) in @(SCADE_VAL_LST_CHK, sdvalst);
11.4 Clock Calculus
This section describes clock consistency checking - the clock calculus.
11.4.1 The Why of Clock Calculus
Let us now discuss the clock calculus, which represents an original aspect of LUSTRE with respect to dataflow languages. The following program illustrates the reason for such a calculus:
b = true -> not pre(b);
y = x + (x when b);
In the second equation, a data operator combines two flows of distinct clocks. According to standard dataflow philosophy, such a program has a meaning. However, it is easy to see that the computation of the nth value of y needs both the nth and the 2nth values of x. Since a reactive system may be assumed to run forever, its required memory will certainly overflow. Such a program could not be compiled into bounded-memory object code, not to speak of the physical incoherence of adding something at time n to something at time 2n. The clock calculus consists of associating a clock with each expression of the program, and of checking that any operator applies to appropriately clocked operands:
• Any primitive operator with more than one argument applies to operands sharing the “same” clock;
• The clock of any operand of a current operator is not the basic clock of the node it belongs to;
• The clocks of a node's operands should obey the clock requirements stated in the node definition header.
Let us define here what we mean by “the same clock.” Ideally, it could mean the same Boolean flow, but this may require semantic analysis, which is undecidable in general. Thus the compiler uses a more restricted notion of equality: two
Boolean expressions define the same clock if and only if they can be unified by means of syntactic substitutions. Consider the following example:
x = a when (y > z);
y = b + c;
u = d when (b + c > z);
v = e when (z < y);
x and u share the same clock, which is considered distinct from the clock of v, although semantically they are exactly the same.
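Returning to the motivating program above: with b true on the even instants, (x when b) delivers x0, x2, x4, ..., so the nth value of y = x + (x when b) is x[n] + x[2n], and the portion of x that must still be buffered when producing y[n] grows without bound. A small numeric illustration (Python, with x modeled as a function from instants to values, purely illustrative):

```python
# y = x + (x when b), with b true on even instants: the nth value of
# (x when b) is x(2n), so computing y[n] needs x(2n) -- the window of
# x that must be remembered grows with n.
def y(x, n):
    return x(n) + x(2 * n)

x = lambda n: n                  # the flow 0, 1, 2, ...
assert [y(x, n) for n in range(4)] == [0, 3, 6, 9]
# To emit y[n] the program must still hold x[n..2n]: n + 1 values.
```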
11.4.2 Semantic Equality
11.4.2.1 Clock Checking of Two Value Streams
dec SDVAL_CHK2 : [@(TStream, SDVal) -> @(TStream, SDVal) -> Bool]; def SDVAL_CHK2 = \(ts1 : @(TStream, SDVal), ts2 : @(TStream, SDVal)) let hts1 = @(FST, ts1), tts1 = @(SND, ts1), hts2 = @(FST, ts2), tts2 = @(SND, ts2) in let hvv1 = @(PJ2, SDVal, Bool, hts1), hvv2 = @(PJ2, SDVal, Bool, hts2) in @(AND, @(AND, hvv1, hvv2), @(SDVAL_CHK2, tts1, tts2));
11.4.2.2 Clock Checking of Value Stream List
dec SDVALST_CHK : [SDVal_list -> Bool]; def SDVALST_CHK = \(sdvl : SDVal_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, @(TStream, SDVal), sdvl), TT, let hsdvl = @(HEAD, @(TStream, SDVal), sdvl), tsdvl = @(TAIL, @(TStream, SDVal), sdvl) in @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, @(TStream, SDVal), tsdvl),
TT, let htsdvl = @(HEAD, @(TStream, SDVal), tsdvl), ttsdvl = @(TAIL, @(TStream, SDVal), tsdvl) in @(AND, @(SDVAL_CHK2, hsdvl, htsdvl), @(SDVALST_CHK, tsdvl))));
11.4.2.3 Clock Checking of Two Value Stream Lists
dec SDVALST2_CHK : [SDVal_list -> SDVal_list -> Bool]; def SDVALST2_CHK = \(sdvl1 : SDVal_list, sdvl2 : SDVal_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, @(TStream, SDVal), sdvl1), @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, @(TStream, SDVal), sdvl2), TT, FF), @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, @(TStream, SDVal), sdvl2), FF, let hsdvl1 = @(HEAD, @(TStream, SDVal), sdvl1), tsdvl1 = @(TAIL, @(TStream, SDVal), sdvl1), hsdvl2 = @(HEAD, @(TStream, SDVal), sdvl2), tsdvl2 = @(TAIL, @(TStream, SDVal), sdvl2) in @(AND, @(SDVAL_CHK2, hsdvl1, hsdvl2), @(SDVALST2_CHK, tsdvl1, tsdvl2))));
11.4.2.4 Basic Clock Checking of Two Value Streams
The function CHK_BASIC_CL checks whether two given value streams share the same time stamps.

dec CHK_BASIC_CL : [@(TStream, SDVal) -> @(TStream, SDVal) -> Bool];
def CHK_BASIC_CL =
  \(ts1 : @(TStream, SDVal), ts2 : @(TStream, SDVal))
    let hts1 = @(FST, ts1), tts1 = @(SND, ts1),
        hts2 = @(FST, ts2), tts2 = @(SND, ts2) in
let vhts1 = @(PJ1, SDVal, Bool, hts1), thts1 = @(PJ2, SDVal, Bool, hts1), vhts2 = @(PJ1, SDVal, Bool, hts2), thts2 = @(PJ2, SDVal, Bool, hts2) in @(GIF_THEN_ELSE, Bool, @(AND, thts1, thts2), @(CHK_BASIC_CL, tts1, tts2), @(GIF_THEN_ELSE, Bool, @(AND, @(NOT, thts1), @(NOT, thts2)), @(CHK_BASIC_CL, tts1, tts2), FF));
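The behavior of CHK_BASIC_CL can be illustrated on finite stream prefixes. The following Python sketch is a hypothetical analogue, not the PowerEpsilon code: a stream is modeled as a finite list of (value, present) pairs, where the Boolean flag plays the role of the time stamp carried by SDVal.

```python
# Hypothetical finite-prefix analogue of CHK_BASIC_CL: two streams share the
# basic clock iff their presence flags agree at every instant (both present
# or both absent); the carried values are irrelevant to the check.

def chk_basic_cl(ts1, ts2):
    return all(p1 == p2 for (_, p1), (_, p2) in zip(ts1, ts2))

s1 = [(1, True), (2, False), (3, True)]
s2 = [(9, True), (8, False), (7, True)]
s3 = [(9, True), (8, True), (7, True)]

print(chk_basic_cl(s1, s2))  # True: same presence pattern
print(chk_basic_cl(s1, s3))  # False: flags differ at the second instant
```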
11.4.2.5 Non-Basic Clock Checking of Value Streams
dec CHK_NOT_BASIC_CL : [SDVal_list -> @(TStream, SDVal) -> Bool]; def CHK_NOT_BASIC_CL = \(svl : SDVal_list, basic_clock : @(TStream, SDVal)) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, @(TStream, SDVal), svl), TT, let hsvl = @(HEAD, @(TStream, SDVal), svl), tsvl = @(TAIL, @(TStream, SDVal), svl) in @(AND, @(CHK_BASIC_CL, hsvl, basic_clock), @(CHK_NOT_BASIC_CL, tsvl, basic_clock)));
11.4.2.6 Clock Checking of Expression List
dec SD_CLCAL_EXPR : [Expression -> Env -> @(TStream, SDVal) -> Bool]; dec SD_CLCAL_EXPRLST : [Expr_list -> Env -> @(TStream, SDVal) -> Bool]; def SD_CLCAL_EXPRLST = \(el : Expr_list, r : Env, basic_clock : @(TStream, SDVal)) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Expression, el), TT, let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in @(AND,
@(SD_CLCAL_EXPR, hel, r, basic_clock), @(SD_CLCAL_EXPRLST, tel, r, basic_clock)));
11.4.2.7 Clock Checking of If-Expression
dec CHK_IF : [@(TStream, SDVal) -> @(TStream, SDVal) -> @(TStream, SDVal) -> Bool]; def CHK_IF = \(b : @(TStream, SDVal), s : @(TStream, SDVal), t : @(TStream, SDVal)) let hb = @(FST, b), tb = @(SND, b), hs = @(FST, s), ts = @(SND, s), ht = @(FST, t), tt = @(SND, t) in let bv = @(GIF_THEN_ELSE0, Bool, @(PJ2, SDVal, Bool, hb), @(OR, @(PJ2, SDVal, Bool, hs), @(PJ2, SDVal, Bool, ht)), @(AND, @(NOT, @(PJ2, SDVal, Bool, hs)), @(NOT, @(PJ2, SDVal, Bool, ht)))) in @(AND, bv, @(CHK_IF, tb, ts, tt));
11.4.2.8 Clock Checking of If-Expression List
dec CHK_IF_LIST : [SDVal_list -> SDVal_list -> SDVal_list -> Bool]; def CHK_IF_LIST = \(ifl : SDVal_list, thl : SDVal_list, ell : SDVal_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY_SDL, ifl), TT, let hifl = @(HEAD_SDL, ifl), tifl = @(TAIL_SDL, ifl), hthl = @(HEAD_SDL, thl), tthl = @(TAIL_SDL, thl), hell = @(HEAD_SDL, ell), tell = @(TAIL_SDL, ell) in let nv = @(CHK_IF, hifl, hthl, hell),
nl = @(CHK_IF_LIST, tifl, tthl, tell) in @(AND, nv, nl));
11.4.2.9 Clock Checking of Expression
def SD_CLCAL_EXPR = \(e : Expression, r : Env, basic_clock : @(TStream, SDVal)) @(e, Bool, %-- Boolean Expressions --% \(b : Bool) TT, %-- Identifier --% \(var : Identifier) TT, %-- Integer --% \(i : Int) TT, %-- Null --% TT, %-- Tuple --% \(el : Expr_list) @(GIF_THEN_ELSE, Bool, @(SD_CLCAL_EXPRLST, el, r, basic_clock), let svl = @(SDEXPRLIST, el, r) in @(SDVALST_CHK, svl), FF), %-- Projection --% \(x : Bool, n : Nat) TT, %-- Call Operation --% \(i : Identifier, el : Expr_list) @(GIF_THEN_ELSE, Bool, @(SD_CLCAL_EXPRLST, el, r, basic_clock), let svl = @(SDEXPRLIST, el, r) in @(SDVALST_CHK, svl), FF), %-- Binary --% \(o : Binop, x1 : Bool, x2 : Bool) let e1 = @(GET_EXPR_BIN_LEXP, e), e2 = @(GET_EXPR_BIN_REXP, e) in @(GIF_THEN_ELSE, Bool,
@(AND, @(SD_CLCAL_EXPR, e1, r, basic_clock), @(SD_CLCAL_EXPR, e2, r, basic_clock)), let lsvl = @(SDEXPR, e1, r), rsvl = @(SDEXPR, e2, r) in @(SDVALST2_CHK, lsvl, rsvl), FF), %-- Unary --% \(o : Unop, x : Bool) let exp = @(GET_EXPR_UNEXP, e) in @(SD_CLCAL_EXPR, exp, r, basic_clock), %-- PRE --% \(x : Bool) let pe = @(GET_EXPR_PRE, e) in @(SD_CLCAL_EXPR, pe, r, basic_clock), %-- Initialization --% \(x1 : Bool, x2 : Bool) let fi = @(GET_INIT_INIT, e), ff = @(GET_INIT_FOLLOW, e) in @(AND, @(SD_CLCAL_EXPR, fi, r, basic_clock), @(SD_CLCAL_EXPR, ff, r, basic_clock)), %-- Current --% \(x : Bool) let pe = @(GET_EXPR_CUR, e) in let svl = @(SDEXPR, pe, r) in @(AND, @(CHK_NOT_BASIC_CL, svl, basic_clock), @(SD_CLCAL_EXPR, pe, r, basic_clock)), %-- When --% \(x1 : Bool, x2 : Bool) let fi = @(GET_WHEN_EXPR, e), ff = @(GET_WHEN_COND, e) in @(AND, @(SD_CLCAL_EXPR, fi, r, basic_clock), @(SD_CLCAL_EXPR, ff, r, basic_clock)), %-- If-Then-Else --% \(x1 : Bool, x2 : Bool, x3 : Bool) let if = @(GET_IF_COND, e), th = @(GET_IF_THEN, e), el = @(GET_IF_ELSE, e) in let ifsvl = @(SDEXPR, if, r), thsvl = @(SDEXPR, th, r), elsvl = @(SDEXPR, el, r) in let bifsvl = @(SD_CLCAL_EXPR, if, r, basic_clock), bthsvl = @(SD_CLCAL_EXPR, th, r, basic_clock), belsvl = @(SD_CLCAL_EXPR, el, r, basic_clock) in
@(AND, @(CHK_IF_LIST, ifsvl, thsvl, elsvl), @(AND, bifsvl, @(AND, bthsvl, belsvl))));
11.5 Substitution
11.5.1 Substitution of Expression
dec SUBST_EXPR_LIST : [Expr_list -> Input_par -> Expression -> Expr_list]; def DO_SUBST_EXPR = \(e : Expression, ip : Input_par, ie : Expression) @(e, Expression, \(b : Bool) e, \(i : Identifier) @(ip, Expression, \(ii : Identifier, t : Types) @(GIF_THEN_ELSE, Expression, @(EQ_IDENT, i, ii), ie, e), \(ii : Identifier, t : Types, w : Identifier) @(GIF_THEN_ELSE, Expression, @(EQ_IDENT, i, ii), @(MK_WHEN, ie, @(MK_IDENT, w)), e)), \(i : Int) e, e, \(el : Expr_list) @(MK_TUPLE, @(SUBST_EXPR_LIST, el, ip, ie)), \(x : Expression, n : Nat) let pe = @(GET_POS_EXPR, e) in let npe = @(DO_SUBST_EXPR, pe, ip, ie) in @(MK_POS, npe, n), \(f : Identifier, el : Expr_list) let nel = @(SUBST_EXPR_LIST, el, ip, ie) in @(MK_CALL, f, nel), \(o : Binop, x1 : Expression, x2 : Expression) let le = @(GET_EXPR_BIN_LEXP, e), re = @(GET_EXPR_BIN_REXP, e) in
let nle = @(DO_SUBST_EXPR, le, ip, ie), nre = @(DO_SUBST_EXPR, re, ip, ie) in @(MK_BINOP, o, nle, nre), \(o : Unop, x : Expression) let ue = @(GET_EXPR_UNEXP, e) in let nue = @(DO_SUBST_EXPR, ue, ip, ie) in @(MK_UNOP, o, nue), \(x : Expression) let pe = @(GET_EXPR_PRE, e) in let npe = @(DO_SUBST_EXPR, pe, ip, ie) in @(MK_PRE, npe), \(x1 : Expression, x2 : Expression) let ei = @(GET_INIT_INIT, e), ef = @(GET_INIT_FOLLOW, e) in let nei = @(DO_SUBST_EXPR, ei, ip, ie), nef = @(DO_SUBST_EXPR, ef, ip, ie) in @(MK_INIT, nei, nef), \(x : Expression) let ce = @(GET_EXPR_CUR, e) in let nce = @(DO_SUBST_EXPR, ce, ip, ie) in @(MK_CURRENT, nce), \(x1 : Expression, x2 : Expression) let we = @(GET_WHEN_EXPR, e), wc = @(GET_WHEN_COND, e) in let nwe = @(DO_SUBST_EXPR, we, ip, ie), nwc = @(DO_SUBST_EXPR, wc, ip, ie) in @(MK_WHEN, nwe, nwc), \(x1 : Expression, x2 : Expression, x3 : Expression) let ife = @(GET_IF_COND, e), the = @(GET_IF_THEN, e), ele = @(GET_IF_ELSE, e) in let nife = @(DO_SUBST_EXPR, ife, ip, ie), nthe = @(DO_SUBST_EXPR, the, ip, ie), nele = @(DO_SUBST_EXPR, ele, ip, ie) in @(MK_IF, nife, nthe, nele)); dec SUBST_EXPR : [Expression -> Input_par_list -> Expr_list -> Expression]; def SUBST_EXPR = \(e : Expression, ipl : Input_par_list, el : Expr_list) @(GIF_THEN_ELSE, Expression, @(IS_EMPTY, Input_par, ipl), @(GIF_THEN_ELSE, Expression, @(IS_EMPTY, Expression, el),
e, ERR_EXPR), @(GIF_THEN_ELSE, Expression, @(IS_EMPTY, Expression, el), ERR_EXPR, let hipl = @(HEAD, Input_par, ipl), tipl = @(TAIL, Input_par, ipl), hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let ne = @(DO_SUBST_EXPR, e, hipl, hel) in @(SUBST_EXPR, ne, tipl, tel)));
11.5.2 Substitution of Expression List
dec DO_SUBST_EXPR : [Expression -> Input_par -> Expression -> Expression]; def SUBST_EXPR_LIST = \(el : Expr_list, ip : Input_par, ie : Expression) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el), @(NIL, Expression), let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let nhel = @(DO_SUBST_EXPR, hel, ip, ie), ntel = @(SUBST_EXPR_LIST, tel, ip, ie) in @(CONS, Expression, nhel, ntel));
11.5.3 Substitution of Equation
dec SUBST_EQ : [Equation -> Input_par_list -> Expr_list -> Equation]; def SUBST_EQ = \(eq : Equation, ipl : Input_par_list, el : Expr_list) @(eq, Equation, \(il : Ident_list, e : Expression) let ne = @(SUBST_EXPR, e, ipl, el) in @(MK_EQ, il, ne));
11.5.4 Substitution of Equation List
dec SUBST_EQ_LIST : [Eq_list -> Input_par_list -> Expr_list -> Eq_list]; def SUBST_EQ_LIST = \(eql : Eq_list, ipl : Input_par_list, el : Expr_list) @(GIF_THEN_ELSE, Eq_list, @(IS_EMPTY, Equation, eql), @(NIL, Equation), let heql = @(HEAD, Equation, eql), teql = @(TAIL, Equation, eql) in let nheql = @(SUBST_EQ, heql, ipl, el), nteql = @(SUBST_EQ_LIST, teql, ipl, el) in @(CONS, Equation, nheql, nteql));
11.5.5 Substitution of Node Definition
dec SUBST_NODE_DEF : [Node_Def -> Expr_list -> Node_Def]; def SUBST_NODE_DEF = \(nd : Node_Def, el : Expr_list) @(nd, Node_Def, \(hn : Header_Node, ll : Local_list, eqdl : Eq_list) let ipl = @(GET_HEAD_INLS, hn) in @(MK_NODE_DEF, hn, ll, @(SUBST_EQ_LIST, eqdl, ipl, el)));
11.6 Renaming
11.6.1 Renaming of Local Declaration
11.6.1.1 Renaming of Local Declaration
dec RENAME_LOCAL : [Local -> Local]; def RENAME_LOCAL = \(l : Local) @(l, Local, \(i : Identifier, t : Types)
let ni = NEW_NAME in @(MK_LOCAL, ni, t), \(i : Identifier, t : Types, w : Identifier) let ni = NEW_NAME, nw = NEW_NAME in @(MK_WLOCAL, ni, t, nw));
11.6.1.2 Renaming of Local Declaration List
dec RENAME_LOCAL_LIST : [Local_list -> Local_list]; def RENAME_LOCAL_LIST = \(ll : Local_list) @(GIF_THEN_ELSE, Local_list, @(IS_EMPTY, Local, ll), @(NIL, Local), let hll = @(HEAD, Local, ll), tll = @(TAIL, Local, ll) in let nhll = @(RENAME_LOCAL, hll), ntll = @(RENAME_LOCAL_LIST, tll) in @(CONS, Local, nhll, ntll));
11.6.2 Translation from Local Declaration to Expression
11.6.2.1 Translation from Local Declaration to Expression
dec TRAN_LOC_EXP : [Local -> Expression]; def TRAN_LOC_EXP = \(l : Local) @(l, Expression, \(i : Identifier, t : Types) @(MK_IDENT, i), \(i : Identifier, t : Types, w : Identifier) @(MK_WHEN, @(MK_IDENT, i), @(MK_IDENT, w)));
11.6.2.2 Translation from Local Declaration List to Expression List
dec TRAN_LLST_ELST : [Local_list -> Expr_list]; def TRAN_LLST_ELST =
\(ll : Local_list) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Local, ll), @(NIL, Expression), let hll = @(HEAD, Local, ll), tll = @(TAIL, Local, ll) in let ehll = @(TRAN_LOC_EXP, hll), etll = @(TRAN_LLST_ELST, tll) in @(CONS, Expression, ehll, etll));
11.6.3 Renaming of Node Definition
dec RENAME_NODE_DEF : [Node_Def -> Node_Def]; def RENAME_NODE_DEF = \(nd : Node_Def) @(nd, Node_Def, \(hn : Header_Node, ll : Local_list, eqdl : Eq_list) let nll = @(RENAME_LOCAL_LIST, ll), nel = @(TRAN_LLST_ELST, nll), neqdl = @(SUBST_EQ_LIST, eqdl, ll, nel) in @(MK_NODE_DEF, hn, nll, neqdl));
11.7 Directed Acyclic Variable Graph and Topological Sorting
11.7.1 Directed Acyclic Variable Graph
A directed acyclic variable graph will be constructed according to the following rule: there is an edge from vertex Vy to vertex Vx if there is an equation x = e; in which the variable y occurs, and there is another equation y = e';. The vertices of the graph will be the variables v, pre(v), pre(pre(v)), .... We have pre(x + y) = pre(x) + pre(y). The ordering is: if x = ... y ...; then y < x, so the equations are scheduled as

y = ...;
...
x = ... y ...;
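The edge rule above can be sketched in Python. This is a hypothetical illustration, not the PowerEpsilon development: equations are modeled as a dict mapping each defined variable to the set of variables occurring in its right-hand side, and the pre(...) vertices are omitted for brevity.

```python
# Hypothetical sketch of the edge rule: an edge y -> x is added for each
# equation x = e in which y occurs, provided y is itself defined by an
# equation (free inputs such as a, b do not become vertices here).

def build_var_graph(equations):
    graph = {x: set() for x in equations}   # vertex -> set of successors
    for x, rhs_vars in equations.items():
        for y in rhs_vars:
            if y in equations:              # y has its own equation y = e'
                graph[y].add(x)
    return graph

eqs = {'x': {'y', 'a'},   # x = ... y ... a ...;
       'y': {'b'}}        # y = ... b ...;
g = build_var_graph(eqs)
print(g['y'])  # {'x'}: y must be computed before x (y < x)
```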
11.7.1.1 Definition of PIdent
A type PIdent is defined for representing pre(x), pre(pre(x)), ....

def PIdent = !(T : Prop) [[Identifier -> T] -> [T -> T] -> T];

dec MK_IDENT : [Identifier -> PIdent];
def MK_IDENT =
  \(i : Identifier)
  \(T : Prop)
  \(x : [Identifier -> T], y : [T -> T])
    @(x, i);

dec MK_PIDENT : [PIdent -> PIdent];
def MK_PIDENT =
  \(pi : PIdent)
  \(T : Prop)
  \(x : [Identifier -> T], y : [T -> T])
    @(y, @(pi, T, x, y));

def PIdent_list = @(List, PIdent);
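The intent of this Church-style encoding can be illustrated with a hypothetical first-order analogue in Python (an assumption made for the example, not the PowerEpsilon representation): a vertex is a pair of an identifier and a pre-nesting depth, MK_IDENT builds depth 0, and MK_PIDENT wraps one more pre(...).

```python
# Hypothetical first-order analogue of PIdent: (identifier, depth), where
# depth counts the nesting of pre(...).

def mk_ident(i):
    return (i, 0)             # the plain variable i

def mk_pident(pi):
    name, depth = pi
    return (name, depth + 1)  # wrap one more pre(...) around pi

v = mk_pident(mk_pident(mk_ident('x')))
print(v)  # ('x', 2) stands for pre(pre(x))
```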
11.7.1.2 Checking of Occurrence of Identifier in Equations
dec IS_IN_EQ_LIST : [Identifier -> Eq_list -> Bool]; def IS_IN_EQ_LIST = \(i : Identifier, eql : Eq_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Equation, eql), FF, let heql = @(HEAD, Equation, eql), teql = @(TAIL, Equation, eql) in let idl = @(GET_EQ_IDLIST, heql) in let id = @(HEAD, Identifier, idl) in @(GIF_THEN_ELSE,
Bool, @(EQ_IDENT, id, i), TT, @(IS_IN_EQ_LIST, i, teql)));
11.7.1.3 Finding All PIdents
dec ExprIdEdge : [Expression -> Eq_list -> PIdent_list]; dec ExprLstIdEdge : [Expr_list -> Eq_list -> PIdent_list]; def ExprLstIdEdge = \(el : Expr_list, geql : Eq_list) @(GIF_THEN_ELSE, PIdent_list, @(IS_EMPTY, Expression, el), @(NIL, PIdent), let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let il1 = @(ExprIdEdge, hel, geql), il2 = @(ExprLstIdEdge, tel, geql) in @(CONCAT, PIdent, il1, il2)); dec DO_PRE_IDENT_LIST : [PIdent_list -> PIdent_list]; def ExprIdEdge = \(e : Expression, geql : Eq_list) @(e, PIdent_list, \(b : Bool) @(NIL, PIdent), \(i : Identifier) @(GIF_THEN_ELSE, PIdent_list, @(IS_IN_EQ_LIST, i, geql), @(CONS, PIdent, @(MK_IDENT, i), @(NIL, PIdent)), @(NIL, PIdent)), \(i : Int) @(NIL, PIdent), @(NIL, PIdent), \(el : Expr_list) @(ExprLstIdEdge, el, geql), \(x : PIdent_list, n : Nat) let pe = @(GET_POS_EXPR, e) in @(ExprIdEdge, pe, geql), \(fn : Identifier, el : Expr_list) @(NIL, PIdent),
\(o : Binop, x1 : PIdent_list, x2 : PIdent_list) let le = @(GET_EXPR_BIN_LEXP, e), re = @(GET_EXPR_BIN_REXP, e) in let il1 = @(ExprIdEdge, le, geql), il2 = @(ExprIdEdge, re, geql) in @(CONCAT, PIdent, il1, il2), \(o : Unop, x : PIdent_list) let ue = @(GET_EXPR_UNEXP, e) in @(ExprIdEdge, ue, geql), \(x : PIdent_list) let pe = @(GET_EXPR_PRE, e) in @(GIF_THEN_ELSE, PIdent_list, @(IS_IDENT_EXPR, pe), let peid = @(GET_EXPR_IDENT, pe) in @(GIF_THEN_ELSE, PIdent_list, @(IS_IN_EQ_LIST, peid, geql), @(CONS, PIdent, @(MK_PIDENT, @(MK_IDENT, peid)), @(NIL, PIdent)), let idl = @(ExprIdEdge, pe, geql) in @(DO_PRE_IDENT_LIST, idl)), let idl = @(ExprIdEdge, pe, geql) in @(DO_PRE_IDENT_LIST, idl)), \(x1 : PIdent_list, x2 : PIdent_list) let ei = @(GET_INIT_INIT, e), ef = @(GET_INIT_FOLLOW, e) in let lei = @(ExprIdEdge, ei, geql), lef = @(ExprIdEdge, ef, geql) in @(CONCAT, PIdent, lei, lef), \(x : PIdent_list) let ce = @(GET_EXPR_CUR, e) in @(ExprIdEdge, ce, geql), \(x1 : PIdent_list, x2 : PIdent_list) let we = @(GET_WHEN_EXPR, e), wc = @(GET_WHEN_COND, e) in let lwe = @(ExprIdEdge, we, geql), lwc = @(ExprIdEdge, wc, geql) in @(CONCAT, PIdent, lwe, lwc), \(x1 : PIdent_list, x2 : PIdent_list, x3 : PIdent_list) let ife = @(GET_IF_COND, e), the = @(GET_IF_THEN, e), ele = @(GET_IF_ELSE, e) in let life = @(ExprIdEdge, ife, geql), lthe = @(ExprIdEdge, the, geql),
lele = @(ExprIdEdge, ele, geql) in @(CONCAT, PIdent, life, @(CONCAT, PIdent, lthe, lele)));
11.7.1.4 Creation of Graph of PIdent
dec DoEqIdGraph : [Equation -> Eq_list -> @(Graph, PIdent) -> @(Graph, PIdent)]; def DoEqIdGraph = \(eq : Equation, geql : Eq_list, g : @(Graph, PIdent)) @(eq, @(Graph, PIdent), \(il : Ident_list, e : Expression) let id = @(HEAD, Identifier, il) in let neql = @(ExprIdEdge, e, geql) in @(GINSERT_NODE_EDGE, PIdent, @(MK_IDENT, id), neql, g)); dec DoEqLstIdGraph : [Eq_list -> Eq_list -> @(Graph, PIdent) -> @(Graph, PIdent)]; def DoEqLstIdGraph = \(l : Eq_list, geql : Eq_list, g : @(Graph, PIdent)) @(GIF_THEN_ELSE, @(Graph, PIdent), @(IS_EMPTY, Equation, l), g, let hl = @(HEAD, Equation, l), tl = @(TAIL, Equation, l) in let ng = @(DoEqIdGraph, hl, geql, g) in @(DoEqLstIdGraph, tl, geql, ng)); def CreatIdentGraph = \(eql : Eq_list) @(DoEqLstIdGraph, eql, eql, @(GNIL_GRAPH, PIdent)); def NodeIdGraph = \(nd : Node_Def) @(nd, @(Graph, PIdent), \(hn : Header_Node, ll : Local_list, eqdl : Eq_list) @(CreatIdentGraph, eqdl));
11.7.2 Topological Sorting
The process of constructing a linear order such as < is called topological sorting.

Input description: A directed acyclic graph G = (V, E), also known as a partial order or poset.

Problem description: Find a linear ordering of the vertices of V such that for each edge (i, j), vertex i is to the left of vertex j.

Discussion: Topological sorting arises as a natural subproblem in most algorithms on directed acyclic graphs. Topological sorting orders the vertices and edges of a DAG in a simple and consistent way and hence plays the same role for DAGs that depth-first search does for general graphs. Topological sorting can be used to schedule tasks under precedence constraints. Suppose we have a set of tasks to do, but certain tasks have to be performed before other tasks. These precedence constraints form a directed acyclic graph, and any topological sort (also known as a linear extension) defines an order in which to do these tasks such that each is performed only after all of its constraints are satisfied. Three important facts about topological sorting are:

1. Only directed acyclic graphs can have linear extensions, since any directed cycle is an inherent contradiction to a linear order of tasks.

2. Every DAG can be topologically sorted, so there must always be at least one schedule for any reasonable precedence constraints among jobs.

3. DAGs typically allow many such schedules, especially when there are few constraints. Consider n jobs without any constraints. Any of the n! permutations of the jobs constitutes a valid linear extension.

A linear extension of a given DAG is easily found in linear time. The basic algorithm performs a depth-first search of the DAG to identify the complete set of source vertices, where source vertices are vertices without incoming edges. At least one such source must exist in any DAG. Note that source vertices can appear at the start of any schedule without violating any constraints.
After deleting all the outgoing edges of the source vertices, we will create new source vertices, which can sit comfortably to the immediate right of the first set. We repeat until all vertices have been accounted for. Only a modest amount of care with data structures (adjacency lists and queues) is needed to make this run in O(n + m) time. This algorithm is simple enough that you should be able to code up your own implementation and expect good performance. Two special considerations are:

• What if I need all the linear extensions, instead of just one of them? In certain applications, it is important to construct all linear extensions of a DAG. Beware, because the number of linear extensions can grow
exponentially in the size of the graph. Even the problem of counting the number of linear extensions is #P-complete.

• Algorithms for listing all linear extensions of a DAG are based on backtracking. They build all possible orderings from left to right, where each of the in-degree-zero vertices is a candidate for the next vertex. The outgoing edges of the selected vertex are deleted before moving on. Constructing truly random linear extensions is a hard problem, but pseudo-random orders can be constructed from left to right by selecting randomly among the in-degree-zero vertices.

• What if your graph is not acyclic? When the set of constraints is not a DAG but contains some inherent contradictions in the form of cycles, the natural problem becomes to find the smallest set of jobs or constraints whose elimination leaves a DAG. These smallest sets of offending jobs (vertices) or constraints (edges) are known as the feedback vertex set and the feedback arc set, respectively, and are discussed in Section. Unfortunately, both are NP-complete problems. Since the basic topological sorting algorithm will get stuck as soon as it identifies a vertex on a directed cycle, we can delete the offending edge or vertex and continue. This quick-and-dirty heuristic will eventually leave a DAG.
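The queue-based linear-time algorithm described above can be sketched in Python (a standard Kahn-style formulation, given here for illustration; the PowerEpsilon development below proceeds instead by repeatedly extracting a source vertex from the graph representation).

```python
from collections import deque

# Sketch of the O(n + m) algorithm: repeatedly emit a source vertex
# (in-degree zero) and delete its outgoing edges; vertices left over at the
# end indicate a directed cycle, where the basic algorithm gets stuck.

def topological_sort(graph):
    indeg = {v: 0 for v in graph}
    for succs in graph.values():
        for w in succs:
            indeg[w] += 1
    queue = deque(v for v, d in indeg.items() if d == 0)  # initial sources
    order = []
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            indeg[w] -= 1
            if indeg[w] == 0:          # w has become a new source
                queue.append(w)
    if len(order) != len(graph):       # a directed cycle remains
        raise ValueError("graph is not acyclic")
    return order

g = {'y': {'x'}, 'b': {'y'}, 'x': set(), 'a': {'x'}}
print(topological_sort(g))  # e.g. ['b', 'a', 'y', 'x'], one valid extension
```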
dec DO_FIND_SRC_VTX : [@(Graph, PIdent) -> @(Graph, PIdent) -> @(Product, PIdent, PIdent_list)]; def DO_FIND_SRC_VTX = \(lg : @(Graph, PIdent), gg : @(Graph, PIdent)) @(GIF_THEN_ELSE, @(Product, PIdent, PIdent_list), @(IS_EMPTY, @(Product, PIdent, PIdent_list), lg), @(ERR, @(Product, PIdent, PIdent_list)), let hlg = @(HEAD, @(Product, PIdent, PIdent_list), lg), tlg = @(TAIL, @(Product, PIdent, PIdent_list), lg) in let pid = @(PJ1, PIdent, PIdent_list, hlg) in @(GIF_THEN_ELSE, @(Product, PIdent, PIdent_list), @(NODE_IN_GRAPH, PIdent, pid, gg), @(DO_FIND_SRC_VTX, tlg, gg), hlg)); def FIND_SOURCE_VERTEX = \(g : @(Graph, PIdent)) @(DO_FIND_SRC_VTX, g, g); dec TOPOLOGY_SORT : [@(Graph, PIdent) -> @(Graph, PIdent)];
def TOPOLOGY_SORT = \(g : @(Graph, PIdent)) let vertex = @(FIND_SOURCE_VERTEX, g) in let node = @(PRO1, PIdent, PIdent_list, vertex), edge = @(PRO2, PIdent, PIdent_list, vertex) in let ng = @(GDELETE_NODE, PIdent, node, g) in let nng = @(TOPOLOGY_SORT, ng) in @(GINSERT_NODE_EDGE, PIdent, node, edge, nng);
11.8 Checking of Recursive Node Call
Absence of recursive node calls: in view of obtaining automata-like executable programs, LUSTRE has so far allowed only static networks to be described. The problem of structuring recursive calls so that this property is maintained has not yet been investigated.
11.8.1 Name Occurrence Checking
11.8.1.1 Name Occurrence Checking in Expression
dec NamOccExprLst : [Identifier -> Expr_list -> Bool]; dec NamOccExpr : [Identifier -> Expression -> Bool]; def NamOccExpr = \(i : Identifier, expr : Expression) @(expr, Bool, \(b : Bool) FF, \(v : Identifier) FF, \(int : Int) FF, FF, \(el : Expr_list) @(NamOccExprLst, i, el), \(x : Bool, n : Nat) let e = @(GET_POS_EXPR, expr) in @(NamOccExpr, i, e), \(nn : Identifier, el : Expr_list) @(GIF_THEN_ELSE, Bool, @(EQ_IDENT, i, nn),
TT, @(NamOccExprLst, i, el)), \(o : Binop, x1 : Bool, x2 : Bool) let e1 = @(GET_EXPR_BIN_LEXP, expr), e2 = @(GET_EXPR_BIN_REXP, expr) in @(OR, @(NamOccExpr, i, e1), @(NamOccExpr, i, e2)), \(o : Unop, x : Bool) let e = @(GET_EXPR_UNEXP, expr) in @(NamOccExpr, i, e), \(x : Bool) let e = @(GET_EXPR_PRE, expr) in @(NamOccExpr, i, e), \(x1 : Bool, x2 : Bool) let e1 = @(GET_INIT_INIT, expr), e2 = @(GET_INIT_FOLLOW, expr) in @(OR, @(NamOccExpr, i, e1), @(NamOccExpr, i, e2)), \(x : Bool) let e = @(GET_EXPR_CUR, expr) in @(NamOccExpr, i, e), \(x1 : Bool, x2 : Bool) %-- When --% let e1 = @(GET_WHEN_EXPR, expr), e2 = @(GET_WHEN_COND, expr) in @(OR, @(NamOccExpr, i, e1), @(NamOccExpr, i, e2)), \(x1 : Bool, x2 : Bool, x3 : Bool) %-- If-Then-Else --% let e1 = @(GET_IF_COND, expr), e2 = @(GET_IF_THEN, expr), e3 = @(GET_IF_ELSE, expr) in @(OR, @(NamOccExpr, i, e1), @(OR, @(NamOccExpr, i, e2), @(NamOccExpr, i, e3))));
11.8.1.2 Name Occurrence Checking in Expression List
def NamOccExprLst = \(i : Identifier, el : Expr_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Expression, el), FF, let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in @(OR, @(NamOccExpr, i, hel), @(NamOccExprLst, i, tel)));
11.8.1.3 Name Occurrence Checking in Equation
dec NamOccEq : [Identifier -> Equation -> Bool]; def NamOccEq = \(i : Identifier, eq : Equation) let expr = @(GET_EQ_BODY, eq) in @(NamOccExpr, i, expr);
11.8.1.4 Name Occurrence Checking in Equation List
dec NamOccEqLst : [Identifier -> Eq_list -> Bool]; def NamOccEqLst = \(i : Identifier, eql : Eq_list) @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Equation, eql), FF, let heql = @(HEAD, Equation, eql), teql = @(TAIL, Equation, eql) in @(OR, @(NamOccEq, i, heql), @(NamOccEqLst, i, teql)));
11.8.2 Construction of Node Call Graph
A node call graph will be constructed first. Then an algorithm for finding strongly connected components will be applied to check whether any recursive node call exists.

11.8.2.1 Checking of Node Name Called by Node Definition
dec NodDefCallChk : [Identifier -> Node_Def -> Bool]; def NodDefCallChk = \(i : Identifier, nodef : Node_Def) let eqdl = @(GET_NODE_EQLS, nodef) in @(NamOccEqLst, i, eqdl);
11.8.2.2 Checking of Node Name Called by Node Definition List
dec NodDefLstCallChk : [Identifier -> Node_Def_List -> Node_Def_List -> Node_Def_List]; def NodDefLstCallChk = \(i : Identifier, l : Node_Def_List, ndl : Node_Def_List) @(GIF_THEN_ELSE, Node_Def_List, @(IS_EMPTY, Node_Def, l), ndl, let hl = @(HEAD, Node_Def, l), tl = @(TAIL, Node_Def, l) in @(GIF_THEN_ELSE, Node_Def_List, @(NodDefCallChk, i, hl), let nndl = @(NodDefLstCallChk, i, tl, ndl) in @(CONS, Node_Def, hl, nndl), @(NodDefLstCallChk, i, tl, ndl)));
11.8.2.3 Construction of Node Call Graph from a Node Definition
dec RecNodDDefCalChk : [Node_Def -> Node_Def_List -> @(Graph, Node_Def) -> @(Graph, Node_Def)]; def RecNodDDefCalChk = \(n : Node_Def, ndl : Node_Def_List, g : @(Graph, Node_Def)) let head = @(GET_NODE_HEAD, n) in let nodename = @(GET_HEAD_NAME, head) in let nndl = @(NodDefLstCallChk, nodename, ndl, @(NIL, Node_Def)) in @(GINSERT_NODE_EDGE, Node_Def, n, nndl, g);
11.8.2.4 Construction of Node Call Graph from a Node Definition List
dec RecNodDLstCalChk : [Node_Def_List ->
Node_Def_List -> @(Graph, Node_Def) -> @(Graph, Node_Def)]; def RecNodDLstCalChk = \(l : Node_Def_List, ndl : Node_Def_List, g : @(Graph, Node_Def)) @(GIF_THEN_ELSE, @(Graph, Node_Def), @(IS_EMPTY, Node_Def, l), g, let hl = @(HEAD, Node_Def, l), tl = @(TAIL, Node_Def, l) in let ng = @(RecNodDDefCalChk, hl, ndl, g) in @(RecNodDLstCalChk, tl, ndl, ng));
11.8.2.5 Construction of Node Call Graph
def RecNodeCallChk = \(ndl : Node_Def_List) @(RecNodDLstCalChk, ndl, ndl, @(GNIL_GRAPH, Node_Def));
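The check applied to the call graph built above can be sketched in Python. This is a hypothetical illustration, not the PowerEpsilon code: a recursive node call exists iff the call graph contains a directed cycle (equivalently, a strongly connected component containing a cycle), which a depth-first search detects via a back edge.

```python
# Hypothetical sketch: DFS three-coloring detects a directed cycle in the
# node call graph, i.e. a (possibly mutually) recursive node call.

def has_recursive_call(call_graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in call_graph}

    def dfs(n):
        color[n] = GRAY                      # n is on the current DFS path
        for m in call_graph.get(n, ()):
            if color.get(m, WHITE) == GRAY:  # back edge: a cycle
                return True
            if color.get(m, WHITE) == WHITE and dfs(m):
                return True
        color[n] = BLACK                     # n fully explored
        return False

    return any(color[n] == WHITE and dfs(n) for n in call_graph)

print(has_recursive_call({'main': {'f'}, 'f': {'g'}, 'g': set()}))  # False
print(has_recursive_call({'main': {'f'}, 'f': {'g'}, 'g': {'f'}}))  # True
```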
11.9 Node Expansion
11.9.1 The Why of Node Expansion
The LUSTRE compiler produces purely sequential code. This raises the question of separately compiling nodes that are used in other nodes. The following example shows why this cannot easily be done for LUSTRE:

node two_copies(a, b : int) returns (x, y : int);
let
  x = a;
  y = b;
tel.

Clearly, there are two possible sequential codes for a basic cycle of this node, either

x := a; y := b;

or
y := b; x := a;

But the choice between these two programs may depend on the way the node is used within another node; for instance:

(x, y) := two_copies(a, x);

In this case, only the former program is correct. Thus, before compiling a program, the compiler first expands recursively all the nodes called by that program: formal parameters are substituted with actual ones, local variables are given unique names (so as to distinguish that node call from other instances of the same node), and then the called node body is inserted into the calling node body. The code generation step will then start from a "flat" node which does not call any other node.
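The expansion steps just described (substitute actuals for formals, rename per-instance names uniquely, inline the equations) can be sketched in Python. This is a deliberately simplified, hypothetical model, not the PowerEpsilon development: a node is a triple of input names, output names, and equations mapping each defined name to the list of names it reads.

```python
import itertools

# Hypothetical sketch of node expansion: expanding a call substitutes the
# actual arguments for the formal parameters and renames the instance's
# defined names with fresh names, so the callee's equations can be inlined
# into the caller and later ordered by the variable dependency graph.

_fresh = itertools.count()

def expand_call(node, actuals):
    inputs, outputs, equations = node
    subst = dict(zip(inputs, actuals))                        # formal -> actual
    renaming = {o: f"{o}_{next(_fresh)}" for o in outputs}    # unique per call
    inlined = {renaming[lhs]: [subst.get(v, renaming.get(v, v)) for v in rhs]
               for lhs, rhs in equations.items()}
    return [renaming[o] for o in outputs], inlined

# node two_copies(a, b) returns (x, y): x = a; y = b;
two_copies = (['a', 'b'], ['x', 'y'], {'x': ['a'], 'y': ['b']})

# (x, y) := two_copies(a, x): the second actual is the caller's x
results, eqs = expand_call(two_copies, ['a', 'x'])
print(results, eqs)  # the inlined y-copy now visibly reads the caller's x
```

After inlining, the dependency of the y-copy on the caller's x is explicit, so topological sorting of the flat node yields the correct schedule (x := a before y := b).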
11.9.2 Finding Node in Node Definition
dec FIND_NODE_DEF : [Identifier -> Node_Def_List -> Node_Def]; def FIND_NODE_DEF = \(i : Identifier, ndl : Node_Def_List) @(GIF_THEN_ELSE, Node_Def, @(IS_EMPTY, Node_Def, ndl), ERR_NODE_DEF, let hndl = @(HEAD, Node_Def, ndl), tndl = @(TAIL, Node_Def, ndl) in let head = @(GET_NODE_HEAD, hndl) in let node_name = @(GET_HEAD_NAME, head) in @(GIF_THEN_ELSE, Node_Def, @(EQ_IDENT, node_name, i), hndl, @(FIND_NODE_DEF, i, tndl)));
11.9.3 Expansion of Expression
def ExpEqLoclst = @(Product3, Expression, Local_list, Eq_list);
def ExpLstEqLoclst = @(Product3, Expr_list, Local_list, Eq_list); def EqLoclst = @(And, Local_list, Eq_list); dec EXPR_EXPAND : [Expression -> TEnv -> Node_Def_List -> ExpEqLoclst]; dec EXPR_LIST_EXPAND : [Expr_list -> TEnv -> Node_Def_List -> ExpLstEqLoclst]; def EXPR_EXPAND = \(e : Expression, r : TEnv, ndl : Node_Def_List) @(e, ExpEqLoclst, \(b : Bool) @(PRODUCT3, Expression, Local_list, Eq_list, e, @(NIL, Local), @(NIL, Equation)), \(i : Identifier) @(PRODUCT3, Expression, Local_list, Eq_list, e, @(NIL, Local), @(NIL, Equation)), \(i : Int) @(PRODUCT3, Expression, Local_list, Eq_list, e, @(NIL, Local), @(NIL, Equation)), @(PRODUCT3, Expression, Local_list, Eq_list, e, @(NIL, Local), @(NIL, Equation)), \(el : Expr_list)
let expeqlloclst = @(EXPR_LIST_EXPAND, el, r, ndl) in let exl = @(PROJ1, Expr_list, Local_list, Eq_list, expeqlloclst), lol = @(PROJ2, Expr_list, Local_list, Eq_list, expeqlloclst), eql = @(PROJ3, Expr_list, Local_list, Eq_list, expeqlloclst) in let nexp = @(MK_TUPLE, exl) in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, lol, eql), \(x : ExpEqLoclst, n : Nat) let pe = @(GET_POS_EXPR, e) in let expeqloclst = @(EXPR_EXPAND, pe, r, ndl) in let exp = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst), lol = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst), eql = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst) in let nexp = @(MK_POS, exp, n) in @(PRODUCT3, Expression, Local_list, Eq_list,
nexp, lol, eql), \(fn : Identifier, el : Expr_list) let ndf = @(FIND_NODE_DEF, fn, ndl) in let newndf = @(SUBST_NODE_DEF, ndf, el), rennewndf = @(RENAME_NODE_DEF, newndf) in let nodehead = @(GET_NODE_HEAD, rennewndf), nodellst = @(GET_NODE_LLST, rennewndf), nodeeqls = @(GET_NODE_EQLS, rennewndf) in let olst = @(GET_HEAD_OULS, nodehead) in let nexp = @(MK_TUPLE, @(TRAN_LLST_ELST, olst)), neql = nodeeqls in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, nodellst, neql), \(o : Binop, x1 : ExpEqLoclst, x2 : ExpEqLoclst) let le = @(GET_EXPR_BIN_LEXP, e), re = @(GET_EXPR_BIN_REXP, e) in let expeqloclst1 = @(EXPR_EXPAND, le, r, ndl), expeqloclst2 = @(EXPR_EXPAND, re, r, ndl) in let exp1 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst1), lol1 = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst1), eql1 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst1), exp2 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst2), lol2 = @(PROJ2, Expression,
Local_list, Eq_list, expeqloclst2), eql2 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst2) in let nexp = @(MK_BINOP, o, exp1, exp2), nlol = @(CONCAT, Local, lol1, lol2), neql = @(CONCAT, Equation, eql1, eql2) in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, nlol, neql), \(o : Unop, x : ExpEqLoclst) let ue = @(GET_EXPR_UNEXP, e) in let expeqloclst = @(EXPR_EXPAND, ue, r, ndl) in let exp = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst), lol = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst), eql = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst) in let nexp = @(MK_UNOP, o, exp) in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, lol, eql), \(x : ExpEqLoclst) let pe = @(GET_EXPR_PRE, e) in let expeqloclst = @(EXPR_EXPAND, pe, r, ndl) in
let exp = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst), lol = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst), eql = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst) in let nexp = @(MK_PRE, exp) in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, lol, eql), \(x1 : ExpEqLoclst, x2 : ExpEqLoclst) let ei = @(GET_INIT_INIT, e), ef = @(GET_INIT_FOLLOW, e) in let expeqloclst1 = @(EXPR_EXPAND, ei, r, ndl), expeqloclst2 = @(EXPR_EXPAND, ef, r, ndl) in let exp1 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst1), lol1 = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst1), eql1 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst1), exp2 = @(PROJ1, Expression, Local_list, Eq_list,
expeqloclst2), lol2 = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst2), eql2 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst2) in let nexp = @(MK_INIT, exp1, exp2), nlol = @(CONCAT, Local, lol1, lol2), neql = @(CONCAT, Equation, eql1, eql2) in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, nlol, neql), \(x : ExpEqLoclst) let ce = @(GET_EXPR_CUR, e) in let expeqloclst = @(EXPR_EXPAND, ce, r, ndl) in let exp = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst), lol = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst), eql = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst) in let nexp = @(MK_CURRENT, exp) in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, lol, eql),
\(x1 : ExpEqLoclst, x2 : ExpEqLoclst) let we = @(GET_WHEN_EXPR, e), wc = @(GET_WHEN_COND, e) in let expeqloclst1 = @(EXPR_EXPAND, we, r, ndl), expeqloclst2 = @(EXPR_EXPAND, wc, r, ndl) in let exp1 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst1), lol1 = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst1), eql1 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst1), exp2 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst2), lol2 = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst2), eql2 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst2) in let nexp = @(MK_WHEN, exp1, exp2), nlol = @(CONCAT, Local, lol1, lol2), neql = @(CONCAT, Equation, eql1, eql2) in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, nlol, neql), \(x1 : ExpEqLoclst, x2 : ExpEqLoclst, x3 : ExpEqLoclst) let ife = @(GET_IF_COND, e),
the = @(GET_IF_THEN, e), ele = @(GET_IF_ELSE, e) in let expeqloclst1 = @(EXPR_EXPAND, ife, r, ndl), expeqloclst2 = @(EXPR_EXPAND, the, r, ndl), expeqloclst3 = @(EXPR_EXPAND, ele, r, ndl) in let exp1 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst1), lol1 = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst1), eql1 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst1), exp2 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst2), lol2 = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst2), eql2 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst2), exp3 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst3), lol3 = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst3), eql3 = @(PROJ3, Expression,
Local_list, Eq_list, expeqloclst3) in let nexp = @(MK_IF, exp1, exp2, exp3), nlol = @(CONCAT, Local, lol1, @(CONCAT, Local, lol2, lol3)), neql = @(CONCAT, Equation, eql1, @(CONCAT, Equation, eql2, eql3)) in @(PRODUCT3, Expression, Local_list, Eq_list, nexp, nlol, neql));
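The node-call branch of EXPR_EXPAND above is the heart of the expansion: the callee's definition is looked up (FIND_NODE_DEF), its formal inputs are substituted by the actual arguments (SUBST_NODE_DEF), its locals and outputs are renamed fresh (RENAME_NODE_DEF), and the call is replaced by the tuple of renamed outputs while the renamed equations are contributed to the caller. The step can be sketched in Python over a deliberately simplified (inputs, outputs, locals, equations) node representation; this is an illustration only, not the PowerEpsilon encoding used in this chapter.

```python
import itertools

_fresh = itertools.count()

def inline_call(node, args):
    """Inline one node call: substitute arguments, rename locals/outputs,
    and return (replacement expression, new locals, new equations)."""
    inputs, outputs, locs, eqs = node
    suffix = "_%d" % next(_fresh)              # RENAME_NODE_DEF analogue
    ren = {v: v + suffix for v in outputs + locs}
    subst = dict(zip(inputs, args))            # SUBST_NODE_DEF analogue
    subst.update(ren)

    def rn(e):                                 # apply substitution to an expr
        if isinstance(e, tuple):
            return tuple(rn(x) for x in e)
        return subst.get(e, e)

    new_eqs = [(ren[lhs], rn(rhs)) for lhs, rhs in eqs]
    new_expr = tuple(ren[o] for o in outputs)  # MK_TUPLE of the outputs
    return new_expr, [ren[l] for l in locs], new_eqs

# node double(x) returns (y): y = x + x;  inline the call double(a)
double = (["x"], ["y"], [], [("y", ("+", "x", "x"))])
expr, locs, eqs = inline_call(double, ["a"])
```

Each call site gets its own fresh suffix, so two calls to the same node never share locals or outputs.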
11.9.4 Expansion of Expression List
def EXPR_LIST_EXPAND = \(el : Expr_list, r : TEnv, ndl : Node_Def_List) @(GIF_THEN_ELSE, ExpLstEqLoclst, @(IS_EMPTY, Expression, el), @(PRODUCT3, Expr_list, Local_list, Eq_list, @(NIL, Expression), @(NIL, Local), @(NIL, Equation)), let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let expeqloclst1 = @(EXPR_EXPAND, hel, r, ndl), expeqloclst2 = @(EXPR_LIST_EXPAND, tel, r, ndl) in let exp1 = @(PROJ1, Expression, Local_list, Eq_list, expeqloclst1), lol1 = @(PROJ2, Expression,
Local_list, Eq_list, expeqloclst1), eql1 = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst1), exp2 = @(PROJ1, Expr_list, Local_list, Eq_list, expeqloclst2), lol2 = @(PROJ2, Expr_list, Local_list, Eq_list, expeqloclst2), eql2 = @(PROJ3, Expr_list, Local_list, Eq_list, expeqloclst2) in let nexp = @(CONS, Expression, exp1, exp2), nlol = @(CONCAT, Local, lol1, lol2), neql = @(CONCAT, Equation, eql1, eql2) in @(PRODUCT3, Expr_list, Local_list, Eq_list, nexp, nlol, neql));
11.9.5 Expansion of Equation
dec EQ_EXPD : [Equation -> TEnv -> Node_Def_List -> EqLoclst]; def EQ_EXPD = \(eq : Equation, r : TEnv, ndl : Node_Def_List) @(eq, EqLoclst, \(il : Ident_list, e : Expression) let expeqloclst = @(EXPR_EXPAND, e, r, ndl) in let exp = @(PROJ1,
Expression, Local_list, Eq_list, expeqloclst), lol = @(PROJ2, Expression, Local_list, Eq_list, expeqloclst), eql = @(PROJ3, Expression, Local_list, Eq_list, expeqloclst) in let neq = @(MK_EQ, il, exp), neql = @(CONCAT, Equation, eql, @(CONS, Equation, neq, @(NIL, Equation))) in @(ANDS, Local_list, Eq_list, lol, neql));
11.9.6 Expansion of Equation List
dec EQ_EXPD_LIST : [Eq_list -> TEnv -> Node_Def_List -> EqLoclst]; def EQ_EXPD_LIST = \(eql : Eq_list, r : TEnv, ndl : Node_Def_List) @(GIF_THEN_ELSE, EqLoclst, @(IS_EMPTY, Equation, eql), @(ANDS, Local_list, Eq_list, @(NIL, Local), @(NIL, Equation)), let heql = @(HEAD, Equation, eql), teql = @(TAIL, Equation, eql) in let nheql = @(EQ_EXPD, heql, r, ndl), nteql = @(EQ_EXPD_LIST, teql, r, ndl) in let nlol1 = @(PJ1, Local_list, Eq_list, nheql), neql1 = @(PJ2, Local_list, Eq_list, nheql), nlol2 = @(PJ1, Local_list, Eq_list, nteql), neql2 = @(PJ2, Local_list, Eq_list, nteql) in @(ANDS, Local_list, Eq_list, @(CONCAT, Local, nlol1, nlol2), @(CONCAT, Equation, neql1, neql2)));
11.9.7 Expansion of Node
dec NODE_EXPAND : [Node_Def -> Node_Def_List -> Node_Def]; def NODE_EXPAND = \(nd : Node_Def, gndl : Node_Def_List) @(nd, Node_Def, \(hn : Header_Node, ll : Local_list, eqdl : Eq_list) let ipl = @(GET_HEAD_INLS, hn), rhn = @(THEADER_NODE, hn, EMPTY_TENV), rll = @(TLOCAL_LIST, ll, rhn) in let neqdlst = @(EQ_EXPD_LIST, eqdl, rll, gndl) in let nlocl = @(PJ1, Local_list, Eq_list, neqdlst), neqdl = @(PJ2, Local_list, Eq_list, neqdlst) in let nll = @(CONCAT, Local, ll, nlocl) in @(MK_NODE_DEF, hn, nll, neqdl));
11.10 Flattening Node
The parallel equation

(i1, i2, ..., in) = (e1, e2, ..., en);

will become the sequential equations

i1 = e1;
i2 = e2;
...
in = en;
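As an illustration only, the rule can be sketched in Python over a hypothetical (identifier-list, expression) pair representation of equations, much simpler than the PowerEpsilon encoding used below:

```python
def flatten_equation(idents, exprs):
    """Analogue of EXPR_LIST_FLAT: pair each identifier of a parallel
    equation with the corresponding component of the tuple expression,
    producing one single-identifier equation per component."""
    if len(idents) != len(exprs):
        raise ValueError("arity mismatch between tuple sides")
    return [([i], e) for i, e in zip(idents, exprs)]

# (i1, i2, i3) = (e1, e2, e3);  becomes  i1 = e1; i2 = e2; i3 = e3;
eqs = flatten_equation(["i1", "i2", "i3"], ["e1", "e2", "e3"])
```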
dec DO_EQ_LIST_FLAT : [Ident_list -> Expression -> Eq_list]; dec EXPR_LIST_FLAT : [Ident_list -> Expr_list -> Eq_list]; def EXPR_LIST_FLAT = \(il : Ident_list, el : Expr_list) @(GIF_THEN_ELSE, Eq_list, @(IS_EMPTY, Identifier, il), @(GIF_THEN_ELSE, Eq_list, @(IS_EMPTY, Expression, el),
@(NIL, Equation), @(NIL, Equation)), @(GIF_THEN_ELSE, Eq_list, @(IS_EMPTY, Expression, el), @(NIL, Equation), let hil = @(HEAD, Identifier, il), til = @(TAIL, Identifier, il), hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let neql = @(EXPR_LIST_FLAT, til, tel) in @(CONS, Equation, @(MK_EQ, @(CONS, Identifier, hil, @(NIL, Identifier)), hel), neql)));
11.10.1 Flattening Expression
11.10.1.1 Flattening Binary Expression
dec BINOP_EXPR_LIST_FLAT : [Binop -> Expr_list -> Expr_list -> Expr_list]; def BINOP_EXPR_LIST_FLAT = \(o : Binop, el1 : Expr_list, el2 : Expr_list) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el1), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el2), @(NIL, Expression), @(NIL, Expression)), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el2), @(NIL, Expression), let hel1 = @(HEAD, Expression, el1), tel1 = @(TAIL, Expression, el1), hel2 = @(HEAD, Expression, el2), tel2 = @(TAIL, Expression, el2) in let nex = @(MK_BINOP, o, hel1, hel2), nel = @(BINOP_EXPR_LIST_FLAT, o, tel1, tel2) in
@(CONS, Expression, nex, nel)));
11.10.1.2 Flattening Unary Expression
dec UNOP_EXPR_LIST_FLAT : [Unop -> Expr_list -> Expr_list]; def UNOP_EXPR_LIST_FLAT = \(o : Unop, el : Expr_list) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el), @(NIL, Expression), let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let ntel = @(UNOP_EXPR_LIST_FLAT, o, tel) in @(CONS, Expression, @(MK_UNOP, o, hel), ntel));
11.10.1.3 Flattening Init-Expression
dec INIT_EXPR_LIST_FLAT : [Expr_list -> Expr_list -> Expr_list]; def INIT_EXPR_LIST_FLAT = \(el1 : Expr_list, el2 : Expr_list) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el1), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el2), @(NIL, Expression), @(NIL, Expression)), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el2), @(NIL, Expression), let hel1 = @(HEAD, Expression, el1), tel1 = @(TAIL, Expression, el1), hel2 = @(HEAD, Expression, el2), tel2 = @(TAIL, Expression, el2) in let nex = @(MK_INIT, hel1, hel2), nel = @(INIT_EXPR_LIST_FLAT, tel1, tel2) in @(CONS, Expression, nex, nel)));
11.10.1.4 Flattening Pre-Expression
dec PRE_EXPR_LIST_FLAT : [Expr_list -> Expr_list]; def PRE_EXPR_LIST_FLAT = \(el : Expr_list) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el), @(NIL, Expression), let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let ntel = @(PRE_EXPR_LIST_FLAT, tel) in @(CONS, Expression, @(MK_PRE, hel), ntel));
11.10.1.5 Flattening Current-Expression
dec CUR_EXPR_LIST_FLAT : [Expr_list -> Expr_list]; def CUR_EXPR_LIST_FLAT = \(el : Expr_list) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el), @(NIL, Expression), let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let ntel = @(CUR_EXPR_LIST_FLAT, tel) in @(CONS, Expression, @(MK_CURRENT, hel), ntel));
11.10.1.6 Flattening When-Expression
dec WHEN_EXPR_LIST_FLAT : [Expr_list -> Expr_list -> Expr_list]; def WHEN_EXPR_LIST_FLAT = \(el1 : Expr_list, el2 : Expr_list) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el1), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el2), @(NIL, Expression),
@(NIL, Expression)), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el2), @(NIL, Expression), let hel1 = @(HEAD, Expression, el1), tel1 = @(TAIL, Expression, el1), hel2 = @(HEAD, Expression, el2), tel2 = @(TAIL, Expression, el2) in let nex = @(MK_WHEN, hel1, hel2), nel = @(WHEN_EXPR_LIST_FLAT, tel1, tel2) in @(CONS, Expression, nex, nel)));
11.10.1.7 Flattening If-Expression
dec IF_EXPR_LIST_FLAT : [Expr_list -> Expr_list -> Expr_list -> Expr_list]; def IF_EXPR_LIST_FLAT = \(el1 : Expr_list, el2 : Expr_list, el3 : Expr_list) @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el1), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el2), @(NIL, Expression), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el3), @(NIL, Expression), @(NIL, Expression))), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el2), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el3), @(NIL, Expression), @(NIL, Expression)), @(GIF_THEN_ELSE, Expr_list, @(IS_EMPTY, Expression, el3), @(NIL, Expression), let hel1 = @(HEAD, Expression, el1),
tel1 = @(TAIL, Expression, el1), hel2 = @(HEAD, Expression, el2), tel2 = @(TAIL, Expression, el2), hel3 = @(HEAD, Expression, el3), tel3 = @(TAIL, Expression, el3) in let nex = @(MK_IF, hel1, hel2, hel3), nel = @(IF_EXPR_LIST_FLAT, tel1, tel2, tel3) in @(CONS, Expression, nex, nel))));
11.10.1.8 Flattening Expression
dec EXPR_FLAT : [Expression -> Expr_list]; def EXPR_FLAT = \(e : Expression) @(e, Expr_list, \(b : Bool) @(CONS, Expression, e, @(NIL, Expression)), \(i : Identifier) @(CONS, Expression, e, @(NIL, Expression)), \(i : Int) @(CONS, Expression, e, @(NIL, Expression)), @(NIL, Expression), \(el : Expr_list) el, \(x : Expr_list, n : Nat) let pe = @(GET_POS_EXPR, e) in let npe = @(EXPR_FLAT, pe) in let fstnpe = @(HEAD, Expression, npe) in @(EXPR_FLAT, fstnpe), \(fn : Identifier, el : Expr_list) @(NIL, Expression), \(o : Binop, x1 : Expr_list, x2 : Expr_list) let le = @(GET_EXPR_BIN_LEXP, e), re = @(GET_EXPR_BIN_REXP, e) in let nle = @(EXPR_FLAT, le), nre = @(EXPR_FLAT, re) in @(BINOP_EXPR_LIST_FLAT, o, nle, nre), \(o : Unop, x : Expr_list) let ue = @(GET_EXPR_UNEXP, e) in let nue = @(EXPR_FLAT, ue) in @(UNOP_EXPR_LIST_FLAT, o, nue), \(x : Expr_list) let pe = @(GET_EXPR_PRE, e) in
let npe = @(EXPR_FLAT, pe) in @(PRE_EXPR_LIST_FLAT, npe), \(x1 : Expr_list, x2 : Expr_list) let ei = @(GET_INIT_INIT, e), ef = @(GET_INIT_FOLLOW, e) in let nei = @(EXPR_FLAT, ei), nef = @(EXPR_FLAT, ef) in @(INIT_EXPR_LIST_FLAT, nei, nef), \(x : Expr_list) let ce = @(GET_EXPR_CUR, e) in let nce = @(EXPR_FLAT, ce) in @(CUR_EXPR_LIST_FLAT, nce), \(x1 : Expr_list, x2 : Expr_list) let we = @(GET_WHEN_EXPR, e), wc = @(GET_WHEN_COND, e) in let nwe = @(EXPR_FLAT, we), nwc = @(EXPR_FLAT, wc) in @(WHEN_EXPR_LIST_FLAT, nwe, nwc), \(x1 : Expr_list, x2 : Expr_list, x3 : Expr_list) let ife = @(GET_IF_COND, e), the = @(GET_IF_THEN, e), ele = @(GET_IF_ELSE, e) in let nife = @(EXPR_FLAT, ife), nthe = @(EXPR_FLAT, the), nele = @(EXPR_FLAT, ele) in @(IF_EXPR_LIST_FLAT, nife, nthe, nele));
11.10.2 Flattening Equation
def DO_EQ_LIST_FLAT = \(il : Ident_list, e : Expression) let el = @(EXPR_FLAT, e) in @(EXPR_LIST_FLAT, il, el); dec EQ_FLAT : [Equation -> Eq_list]; def EQ_FLAT = \(eq : Equation) @(eq, Eq_list, \(il : Ident_list, e : Expression) @(GIF_THEN_ELSE, Eq_list, @(IS_SINGLE_LIST, Identifier, il), @(CONS, Equation, eq, @(NIL, Equation)),
@(DO_EQ_LIST_FLAT, il, e)));
11.10.3 Flattening Equation List
dec EQ_FLAT_LIST : [Eq_list -> Eq_list]; def EQ_FLAT_LIST = \(eql : Eq_list) @(GIF_THEN_ELSE, Eq_list, @(IS_EMPTY, Equation, eql), @(NIL, Equation), let heql = @(HEAD, Equation, eql), teql = @(TAIL, Equation, eql) in let nheql = @(EQ_FLAT, heql), nteql = @(EQ_FLAT_LIST, teql) in @(CONCAT, Equation, nheql, nteql));
11.10.4 Flattening Node Definition
dec NODE_FLAT : [Node_Def -> Node_Def]; def NODE_FLAT = \(nd : Node_Def) @(nd, Node_Def, \(hn : Header_Node, ll : Local_list, eql : Eq_list) let neql = @(EQ_FLAT_LIST, eql) in @(MK_NODE_DEF, hn, ll, neql));
Chapter 12
The SCADE/LUSTRE Compiler - Single-Loop Code

There are several ways to translate a LUSTRE program into a program written in an imperative programming language such as ToyC. The simplest way is to translate a given LUSTRE program into semantically equivalent single-loop code in ToyC. Another approach is to translate it into semantically equivalent automata-like code. Both methods have the disadvantage that the generated code is very difficult to read and understand: when the final executable runs into an error state, it is almost impossible to trace back to the source LUSTRE code to find out where and how the error came from. A source-level debugger relating a LUSTRE program to its binary run-time code is therefore hard, if not impossible, to build on top of these two approaches. Yet another way is to treat each dataflow (stream) as a thread; this will be investigated in the future and is not discussed in this report.
12.1 The Completeness, Soundness and Correctness of Translation
Given two programming languages, a source language and a target language, a translation can be seen formally as a function that takes a "well-formed sentence" of the source language and gives a "well-formed sentence" of the target language. If we want to model the translation, we have to provide a semantics for the source and the target languages. So there should be a source and a target set of values, and the respective semantics should map a "well-formed sentence" to a value. Using the L. Morris diagrams, we now describe three simplified criteria with respect to which we set our work. The formulation of these properties depends on the semantics being deterministic, total, etc.

• Completeness - If a program of the source language evaluates, then its translation into the target language evaluates. In the following diagram, solid arrows correspond to hypotheses and dashed arrows correspond to what remains to be proved.

LUSTRE terms ------Compiler------> ToyC terms
      |                                 :
LUSTRESemantics                   ToyCSemantics
      |                                 :
      v                                 v
LUSTRE values                     ToyC values
• Soundness - If the translation of a source program P evaluates, then the program P evaluates.

LUSTRE terms ------Compiler------> ToyC terms
      :                                 |
LUSTRESemantics                   ToyCSemantics
      :                                 |
      v                                 v
LUSTRE values                     ToyC values
• Correctness - If a program and its translation evaluate w.r.t. the source and target semantics respectively, then they evaluate to equivalent values, with respect to a hypothetical equivalence.

LUSTRE terms ------Compiler------> ToyC terms
      |                                 |
LUSTRESemantics                   ToyCSemantics
      |                                 |
      v                                 v
LUSTRE values - - - -Equiv- - - > ToyC values
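Assuming both semantics are deterministic partial functions, the three criteria can be stated compactly. The notation below is ours, not the book's: C is the compiler, [[.]]_L and [[.]]_T are the LUSTRE and ToyC semantics, the down-arrow means "is defined", and ≈ is the hypothetical equivalence on values.

```latex
% Completeness: if the source program evaluates, so does its translation.
\forall P.\; [\![P]\!]_L\downarrow \;\Rightarrow\; [\![\mathcal{C}(P)]\!]_T\downarrow
% Soundness: if the translation evaluates, so does the source program.
\forall P.\; [\![\mathcal{C}(P)]\!]_T\downarrow \;\Rightarrow\; [\![P]\!]_L\downarrow
% Correctness: when both evaluate, the resulting values are equivalent.
\forall P.\; [\![P]\!]_L\downarrow \;\wedge\; [\![\mathcal{C}(P)]\!]_T\downarrow
  \;\Rightarrow\; [\![P]\!]_L \approx [\![\mathcal{C}(P)]\!]_T
```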
We will not discuss the correctness of the compiler from LUSTRE to ToyC in this work; it remains to be done in our future work.
12.2 The Basic Approach
An obvious way of associating an imperative program with a LUSTRE node consists of constructing an infinite loop whose body implements the input-to-output transformation performed at each basic cycle of the node. LUSTRE can therefore be translated quite naturally and easily into sequential imperative code, as a single endless loop:

initializations;
loop
  acquire inputs;
  compute outputs;
  update memories
end

One just has to determine a correct order for computing outputs and memories, in order to minimize the number of memories (very often, there is no need for two memories to deal with x and pre(x)). The translation is done by
• Choosing the variables to be computed (the output ones and the least possible number of local ones, which implement either memories or temporary buffers);
• Defining the actions which update these variables;
• Choosing an ordering of these actions, according to the dependencies between variables induced by the network structure of the node.
As an example, let us consider the following node:

node WD4(set, reset, u_tps : bool; delay : int)
returns (alarm : bool)
var is_set : bool; remain : int;
let
  alarm = is_set and (remain = 0) and pre(remain) > 0;
  is_set = false -> if set then true
                    else if reset then false
                    else pre(is_set);
  remain = 0 -> if set then delay
                else if u_tps and pre(remain) > 0 then pre(remain) - 1
                else pre(remain);
  assert not(set and reset);
tel.
The single-loop body, which is executed at each program reaction, looks like:

loop
  if _init then
    is_set := false;
    remain := 0;
    alarm := false;
    _init := false
  else
    is_set := if set then true
              else if reset then false
              else is_set
              end if;
    remain := if set then delay
              else if u_tps and (_pre_remain > 0) then _pre_remain - 1
              else _pre_remain
              end if;
    alarm := is_set and (remain = 0) and (_pre_remain > 0)
  end if;
  write(alarm);
  _pre_remain := remain
end loop;
or, by factorizing the conditional:

loop
  if _init then
    is_set := false;
    remain := 0;
    alarm := false;
    _init := false
  else
    if set then
      is_set := true;
      remain := delay
    else
      if reset then
        is_set := false
      end if;
      if u_tps and (_pre_remain > 0) then
        remain := _pre_remain - 1
      end if
    end if;
    alarm := is_set and (remain = 0) and (_pre_remain > 0)
  end if;
  write(alarm);
  _pre_remain := remain
end loop;
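To make the scheme concrete, here is a hand-written Python rendering of the factorized single-loop body above, run as a generator over a finite input list instead of read statements; this is a simulation sketch, not code produced by the compiler described in this chapter.

```python
def wd4(inputs):
    """Single-loop translation of node WD4, one yielded alarm per cycle.
    _init and _pre_remain are the auxiliary memory variables."""
    _init, is_set, remain, _pre_remain = True, False, 0, 0
    for set_, reset, u_tps, delay in inputs:
        assert not (set_ and reset)            # the node's assertion
        if _init:
            is_set, remain, alarm, _init = False, 0, False, False
        else:
            if set_:
                is_set, remain = True, delay
            elif reset:
                is_set = False
            if u_tps and _pre_remain > 0:
                remain = _pre_remain - 1
            alarm = is_set and remain == 0 and _pre_remain > 0
        yield alarm                            # write(alarm)
        _pre_remain = remain                   # update the memory

# set at cycle 1 with delay 2, then let u_tps tick the counter down:
trace = list(wd4([(False, False, False, 2),
                  (True,  False, False, 2),
                  (False, False, True,  2),
                  (False, False, True,  2),
                  (False, False, True,  2)]))
```

The alarm fires exactly once, on the cycle where the delay counter reaches zero.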
12.3 Translation of Equations
12.3.1 Generation of Expressions
12.3.1.1 Translation of Variables
The translation will require two kinds of auxiliary variables:
• The variable _init will be used to identify the initialization state.
• For each variable x occurring in pre(x), the auxiliary variable pre_x will be generated.
These are specified in PowerEpsilon as INIT_IDENT and MK_PRE_IDENT. dec INIT_IDENT : Identifier; dec MK_PRE_IDENT : [Identifier -> Identifier];
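A small Python sketch of how the set of needed pre_x auxiliaries could be computed: walk a toy expression tree (a hypothetical tuple encoding, not the PowerEpsilon one) and collect every variable occurring under a pre(...) operator.

```python
def vars_in(expr):
    """All variable names occurring in a toy (op, args...) expression."""
    op, *args = expr
    vs = {args[0]} if op == "var" else set()
    for a in args:
        if isinstance(a, tuple):
            vs |= vars_in(a)
    return vs

def pre_vars(expr):
    """Names of the _pre_x auxiliary variables the expression requires:
    one per variable occurring under a pre(...) operator."""
    op, *args = expr
    acc = {"_pre_" + v for v in vars_in(args[0])} if op == "pre" else set()
    for a in args:
        if isinstance(a, tuple):
            acc |= pre_vars(a)
    return acc

# remain = 0 -> if set then delay else pre(remain) - 1
e = ("->", ("int", 0),
     ("if", ("var", "set"), ("var", "delay"),
      ("-", ("pre", ("var", "remain")), ("int", 1))))
```

Note that pre applied to a compound expression, e.g. pre(i1+i2), yields one auxiliary per variable, matching the pre_i1 + pre_i2 translation described below.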
The expression current i should be translated to if i == pre_i then pre_i else i; so the variable i in current i should have pre_i generated. The following function is used to find the first identifier i in pre(i). dec CGEN_IDENT : [Expression -> Identifier]; def CGEN_IDENT = \(e : Expression) @(e, Identifier, \(b : Bool) ERR_IDENT, \(i : Identifier)
@(MK_PRE_IDENT, i), \(i : Int) ERR_IDENT, ERR_IDENT, \(el : Expr_list) ERR_IDENT, \(x : Identifier, n : Nat) let pe = @(GET_POS_EXPR, e) in @(CGEN_IDENT, pe), \(f : Identifier, el : Expr_list) ERR_IDENT, \(o : Binop, x1 : Identifier, x2 : Identifier) ERR_IDENT, \(o : Unop, x :Identifier) ERR_IDENT, \(x : Identifier) ERR_IDENT, \(x1 : Identifier, x2 : Identifier) ERR_IDENT, \(x : Identifier) ERR_IDENT, \(x1 : Identifier, x2 : Identifier) ERR_IDENT, \(x1 : Identifier, x2 : Identifier, x3 : Identifier) ERR_IDENT);
12.3.1.2 Translation of Expressions
The following function is used to translate pre(i) as pre_i and pre(i1+i2) as pre_i1 + pre_i2. dec CGEN_PRE_IDENT : [Expression -> TExpression]; dec CGEN_PRE_IDLST : [Expr_list -> TExpr_list]; def CGEN_PRE_IDLST = \(el : Expr_list) @(GIF_THEN_ELSE, TExpr_list, @(IS_EMPTY, Expression, el), @(NIL, TExpression), let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let thel = @(CGEN_PRE_IDENT, hel), ttel = @(CGEN_PRE_IDLST, tel) in @(CONS, TExpression, thel, ttel));
def CGEN_PRE_IDENT = \(e : Expression) @(e, TExpression, \(b : Bool) @(BOOL_EXPR, b), \(i : Identifier) let pid = @(MK_PRE_IDENT, i) in @(IDEN_EXPR, pid), \(i : Int) @(INT_EXPR, i), NULL_EXPR, \(el : Expr_list) let tel = @(CGEN_PRE_IDLST, el) in @(TUPLE_EXPR, tel), \(x : TExpression, n : Nat) let pe = @(GET_POS_EXPR, e) in @(CGEN_PRE_IDENT, pe), \(f : Identifier, el : Expr_list) @(APPL_EXPR, f, @(CGEN_PRE_IDLST, el)), \(o : Binop, x1 : TExpression, x2 : TExpression) let le = @(GET_EXPR_BIN_LEXP, e), re = @(GET_EXPR_BIN_REXP, e) in let exp1 = @(CGEN_PRE_IDENT, le), exp2 = @(CGEN_PRE_IDENT, re) in @(BIN_EXPR, o, exp1, exp2), \(o : Unop, x : TExpression) let ue = @(GET_EXPR_UNEXP, e) in @(UN_EXPR, o, @(CGEN_PRE_IDENT, ue)), \(x : TExpression) let pe = @(GET_EXPR_PRE, e) in NULL_EXPR, \(x1 : TExpression, x2 : TExpression) let ei = @(GET_INIT_INIT, e), ef = @(GET_INIT_FOLLOW, e) in NULL_EXPR, \(x : TExpression) let ce = @(GET_EXPR_CUR, e) in NULL_EXPR, \(x1 : TExpression, x2 : TExpression) let we = @(GET_WHEN_EXPR, e), wc = @(GET_WHEN_COND, e) in let exp1 = @(CGEN_PRE_IDENT, we), exp2 = @(CGEN_PRE_IDENT, wc) in @(IFT_EXPR, exp2, exp1), \(x1 : TExpression, x2 : TExpression, x3 : TExpression)
let ife = @(GET_IF_COND, e), the = @(GET_IF_THEN, e), ele = @(GET_IF_ELSE, e) in let exp1 = @(CGEN_PRE_IDENT, ife), exp2 = @(CGEN_PRE_IDENT, the), exp3 = @(CGEN_PRE_IDENT, ele) in @(IF_EXPR, exp1, exp2, exp3));
The following function is used to translate e in SCADE to e’ in ToyC. dec CGEN_EXPR2TEXPR : [Expression -> TExpression]; dec CGEN_ELST2TELST : [Expr_list -> TExpr_list]; def CGEN_ELST2TELST = \(el : Expr_list) @(GIF_THEN_ELSE, TExpr_list, @(IS_EMPTY, Expression, el), @(NIL, TExpression), let hel = @(HEAD, Expression, el), tel = @(TAIL, Expression, el) in let thel = @(CGEN_EXPR2TEXPR, hel), ttel = @(CGEN_ELST2TELST, tel) in @(CONS, TExpression, thel, ttel)); def CGEN_EXPR2TEXPR = \(e : Expression) @(e, TExpression, \(b : Bool) @(BOOL_EXPR, b), \(i : Identifier) @(IDEN_EXPR, i), \(i : Int) @(INT_EXPR, i), NULL_EXPR, \(el : Expr_list) let tel = @(CGEN_ELST2TELST, el) in @(TUPLE_EXPR, tel), \(x : TExpression, n : Nat) let pe = @(GET_POS_EXPR, e) in let te = @(CGEN_EXPR2TEXPR, pe) in @(PROJ_EXPR, te, n), \(f : Identifier, el : Expr_list) @(APPL_EXPR, f, @(CGEN_ELST2TELST, el)), \(o : Binop, x1 : TExpression, x2 : TExpression)
let le = @(GET_EXPR_BIN_LEXP, e), re = @(GET_EXPR_BIN_REXP, e) in let exp1 = @(CGEN_EXPR2TEXPR, le), exp2 = @(CGEN_EXPR2TEXPR, re) in @(BIN_EXPR, o, exp1, exp2), \(o : Unop, x : TExpression) let ue = @(GET_EXPR_UNEXP, e) in @(UN_EXPR, o, @(CGEN_EXPR2TEXPR, ue)), \(x : TExpression) let pe = @(GET_EXPR_PRE, e) in @(CGEN_PRE_IDENT, pe), \(x1 : TExpression, x2 : TExpression) let ei = @(GET_INIT_INIT, e), ef = @(GET_INIT_FOLLOW, e) in NULL_EXPR, \(x : TExpression) let ce = @(GET_EXPR_CUR, e) in let id = @(CGEN_IDENT, ce) in let pid = @(MK_PRE_IDENT, id) in @(IF_EXPR, @(BIN_EXPR, EQ, @(IDEN_EXPR, pid), @(IDEN_EXPR, id)), @(IDEN_EXPR, pid), @(IDEN_EXPR, id)), \(x1 : TExpression, x2 : TExpression) let we = @(GET_WHEN_EXPR, e), wc = @(GET_WHEN_COND, e) in let exp1 = @(CGEN_EXPR2TEXPR, we), exp2 = @(CGEN_EXPR2TEXPR, wc) in @(IFT_EXPR, exp2, exp1), \(x1 : TExpression, x2 : TExpression, x3 : TExpression) let ife = @(GET_IF_COND, e), the = @(GET_IF_THEN, e), ele = @(GET_IF_ELSE, e) in let exp1 = @(CGEN_EXPR2TEXPR, ife), exp2 = @(CGEN_EXPR2TEXPR, the), exp3 = @(CGEN_EXPR2TEXPR, ele) in @(IF_EXPR, exp1, exp2, exp3));
The following function reduces e1 -> e2 to e1 and translates any other e to e' in ToyC. dec CGEN_INIT_EXPR : [Expression -> TExpression]; def CGEN_INIT_EXPR = \(e : Expression) @(e, TExpression,
\(b : Bool) @(BOOL_EXPR, b), \(i : Identifier) @(IDEN_EXPR, i), \(i : Int) @(INT_EXPR, i), NULL_EXPR, \(el : Expr_list) let tel = @(CGEN_ELST2TELST, el) in @(TUPLE_EXPR, tel), \(x : TExpression, n : Nat) let pe = @(GET_POS_EXPR, e) in let tpe = @(CGEN_INIT_EXPR, pe) in @(PROJ_EXPR, tpe, n), \(f : Identifier, el : Expr_list) @(APPL_EXPR, f, @(CGEN_PRE_IDLST, el)), \(o : Binop, x1 : TExpression, x2 : TExpression) let le = @(GET_EXPR_BIN_LEXP, e), re = @(GET_EXPR_BIN_REXP, e) in let exp1 = @(CGEN_INIT_EXPR, le), exp2 = @(CGEN_INIT_EXPR, re) in @(BIN_EXPR, o, exp1, exp2), \(o : Unop, x : TExpression) let ue = @(GET_EXPR_UNEXP, e) in @(UN_EXPR, o, @(CGEN_INIT_EXPR, ue)), \(x : TExpression) let pe = @(GET_EXPR_PRE, e) in @(CGEN_PRE_IDENT, pe), \(x1 : TExpression, x2 : TExpression) let ei = @(GET_INIT_INIT, e), ef = @(GET_INIT_FOLLOW, e) in @(CGEN_EXPR2TEXPR, ei), \(x : TExpression) let ce = @(GET_EXPR_CUR, e) in let id = @(CGEN_IDENT, ce) in let pid = @(MK_PRE_IDENT, id) in @(IF_EXPR, @(BIN_EXPR, EQ, @(IDEN_EXPR, pid), @(IDEN_EXPR, id)), @(IDEN_EXPR, pid), @(IDEN_EXPR, id)), \(x1 : TExpression, x2 : TExpression) let we = @(GET_WHEN_EXPR, e), wc = @(GET_WHEN_COND, e) in let exp1 = @(CGEN_INIT_EXPR, we), exp2 = @(CGEN_INIT_EXPR, wc) in @(IFT_EXPR, exp2, exp1), \(x1 : TExpression, x2 : TExpression, x3 : TExpression)
let ife = @(GET_IF_COND, e), the = @(GET_IF_THEN, e), ele = @(GET_IF_ELSE, e) in let exp1 = @(CGEN_INIT_EXPR, ife), exp2 = @(CGEN_INIT_EXPR, the), exp3 = @(CGEN_INIT_EXPR, ele) in @(IF_EXPR, exp1, exp2, exp3));
12.3.2 Generation of Statements
dec CGEN_INIT : [Equation -> Statement]; def CGEN_INIT = \(eq : Equation) @(eq, Statement, \(il : Ident_list, e : Expression) let id = @(HEAD, Identifier, il), ex = @(CGEN_INIT_EXPR, e) in @(MK_ASSIGNMENT, id, ex)); dec CGEN_INIT_LIST : [Eq_list -> Statement]; def CGEN_INIT_LIST = \(eql : Eq_list) @(GIF_THEN_ELSE, Statement, @(IS_EMPTY, Equation, eql), MK_NULL_STAT, let heql = @(HEAD, Equation, eql), teql = @(TAIL, Equation, eql) in let st1 = @(CGEN_INIT, heql), st2 = @(CGEN_INIT_LIST, teql) in @(MK_SEQ, st1, st2));
The following function reduces e1 -> e2 to e2, and translates any other e to e' in ToyC. dec CGEN_FOLLOW_EXPR : [Expression -> TExpression]; def CGEN_FOLLOW_EXPR = \(e : Expression) @(e, TExpression, \(b : Bool)
@(BOOL_EXPR, b), \(i : Identifier) @(IDEN_EXPR, i), \(i : Int) @(INT_EXPR, i), NULL_EXPR, \(el : Expr_list) let tel = @(CGEN_ELST2TELST, el) in @(TUPLE_EXPR, tel), \(x : TExpression, n : Nat) let pe = @(GET_POS_EXPR, e) in let tpe = @(CGEN_FOLLOW_EXPR, pe) in @(PROJ_EXPR, tpe, n), \(f : Identifier, el : Expr_list) @(APPL_EXPR, f, @(CGEN_PRE_IDLST, el)), \(o : Binop, x1 : TExpression, x2 : TExpression) let le = @(GET_EXPR_BIN_LEXP, e), re = @(GET_EXPR_BIN_REXP, e) in let exp1 = @(CGEN_FOLLOW_EXPR, le), exp2 = @(CGEN_FOLLOW_EXPR, re) in @(BIN_EXPR, o, exp1, exp2), \(o : Unop, x : TExpression) let ue = @(GET_EXPR_UNEXP, e) in @(UN_EXPR, o, @(CGEN_FOLLOW_EXPR, ue)), \(x : TExpression) let pe = @(GET_EXPR_PRE, e) in @(CGEN_PRE_IDENT, pe), \(x1 : TExpression, x2 : TExpression) let ei = @(GET_INIT_INIT, e), ef = @(GET_INIT_FOLLOW, e) in @(CGEN_EXPR2TEXPR, ef), \(x : TExpression) let ce = @(GET_EXPR_CUR, e) in let id = @(CGEN_IDENT, ce) in let pid = @(MK_PRE_IDENT, id) in @(IF_EXPR, @(BIN_EXPR, EQ, @(IDEN_EXPR, pid), @(IDEN_EXPR, id)), @(IDEN_EXPR, pid), @(IDEN_EXPR, id)), \(x1 : TExpression, x2 : TExpression) let we = @(GET_WHEN_EXPR, e), wc = @(GET_WHEN_COND, e) in let exp1 = @(CGEN_FOLLOW_EXPR, we), exp2 = @(CGEN_FOLLOW_EXPR, wc) in @(IFT_EXPR, exp2, exp1), \(x1 : TExpression, x2 : TExpression, x3 : TExpression) let ife = @(GET_IF_COND, e),
the = @(GET_IF_THEN, e), ele = @(GET_IF_ELSE, e) in let exp1 = @(CGEN_FOLLOW_EXPR, ife), exp2 = @(CGEN_FOLLOW_EXPR, the), exp3 = @(CGEN_FOLLOW_EXPR, ele) in @(IF_EXPR, exp1, exp2, exp3)); dec CGEN_FOLLOW : [Equation -> Statement]; def CGEN_FOLLOW = \(eq : Equation) @(eq, Statement, \(il : Ident_list, e : Expression) let id = @(HEAD, Identifier, il), ex = @(CGEN_FOLLOW_EXPR, e) in @(MK_ASSIGNMENT, id, ex)); dec CGEN_FOLLOW_LIST : [Eq_list -> Statement]; def CGEN_FOLLOW_LIST = \(eql : Eq_list) @(GIF_THEN_ELSE, Statement, @(IS_EMPTY, Equation, eql), MK_NULL_STAT, let heql = @(HEAD, Equation, eql), teql = @(TAIL, Equation, eql) in let st1 = @(CGEN_FOLLOW, heql), st2 = @(CGEN_FOLLOW_LIST, teql) in @(MK_SEQ, st1, st2));
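The effect of the init/follow pair of translations can be sketched in Python on a toy expression AST (a hypothetical tuple encoding, not the PowerEpsilon one): the same expression is translated twice, once keeping the left side of each ->, once keeping the right side, with pre(x) mapped to the memory variable _pre_x in both.

```python
def translate(expr, init):
    """Sketch of CGEN_INIT_EXPR (init=True) / CGEN_FOLLOW_EXPR (init=False),
    rendering the result as a ToyC-like string."""
    op, *args = expr
    if op in ("var", "int"):
        return str(args[0])
    if op == "pre":                            # pre(x)  ~>  _pre_x
        return "_pre_" + args[0][1]
    if op == "->":                             # e1 -> e2: pick one side
        return translate(args[0] if init else args[1], init)
    if op == "binop":                          # (binop, o, left, right)
        o, l, r = args
        return "(%s %s %s)" % (translate(l, init), o, translate(r, init))
    raise ValueError("unknown operator: " + op)

# remain = 0 -> pre(remain) - 1
e = ("->", ("int", 0),
     ("binop", "-", ("pre", ("var", "remain")), ("int", 1)))
```

The init branch of the generated if _init conditional uses translate(e, True), the steady-state branch translate(e, False).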
Input statements are placed at the beginning of the loop body. dec CGEN_READ : [Input_par -> Statement]; def CGEN_READ = \(ip : Input_par) @(ip, Statement, \(i : Identifier, t : Types) @(MK_INPUT, i), \(i : Identifier, t : Types, w : Identifier) @(MK_INPUT, i)); dec CGEN_READ_LIST : [Input_par_list -> Statement];
def CGEN_READ_LIST = \(ipl : Input_par_list) @(GIF_THEN_ELSE, Statement, @(IS_EMPTY, Input_par, ipl), MK_NULL_STAT, let hipl = @(HEAD, Input_par, ipl), tipl = @(TAIL, Input_par, ipl) in let st1 = @(CGEN_READ, hipl), st2 = @(CGEN_READ_LIST, tipl) in @(MK_SEQ, st1, st2));
Output statements are placed at the end of the loop body. dec CGEN_WRITE : [Output -> Statement]; def CGEN_WRITE = \(op : Output) @(op, Statement, \(i : Identifier, t : Types) @(MK_OUTPUT, @(IDEN_EXPR, i)), \(i : Identifier, t : Types, w : Identifier) @(MK_OUTPUT, @(IDEN_EXPR, i))); dec CGEN_WRITE_LIST : [Output_list -> Statement]; def CGEN_WRITE_LIST = \(ol : Output_list) @(GIF_THEN_ELSE, Statement, @(IS_EMPTY, Output, ol), MK_NULL_STAT, let hol = @(HEAD, Output, ol), tol = @(TAIL, Output, ol) in let st1 = @(CGEN_WRITE, hol), st2 = @(CGEN_WRITE_LIST, tol) in @(MK_SEQ, st1, st2));
The following function generates the statements that update the memory variables, such as pre_i := i, placed at the end of the loop body. dec CGEN_PRESET_LIST : [Declaration -> Statement]; def CGEN_PRESET_LIST = \(d : Declaration) @(d,
Statement, MK_NULL_STAT, \(i : Identifier, t : Type_Elem) let pi = @(MK_PRE_IDENT, i) in @(MK_ASSIGNMENT, pi, @(IDEN_EXPR, i)), \(t1 : Statement, t2 : Statement) let d1 = @(LEFT_DECL, d), d2 = @(RIGHT_DECL, d) in @(MK_SEQ, @(CGEN_PRESET_LIST, d1), @(CGEN_PRESET_LIST, d2)));
Generating ToyC statements from a SCADE equation list. dec CGEN_EQ_LIST : [Eq_list -> Input_par_list -> Output_list -> Declaration -> Statement]; def CGEN_EQ_LIST = \(eql : Eq_list, il : Input_par_list, ol : Output_list, d : Declaration) let st1 = @(CGEN_INIT_LIST, eql), st2 = @(CGEN_FOLLOW_LIST, eql) in let inputst = @(CGEN_READ_LIST, il), outputst = @(CGEN_WRITE_LIST, ol), mainst = @(MK_IFTHEL, @(IDEN_EXPR, INIT_IDENT), st1, st2), presetst = @(CGEN_PRESET_LIST, d) in @(MK_SEQ, inputst, @(MK_SEQ, mainst, @(MK_SEQ, outputst, presetst)));
12.4 Translation of Nodes
12.4.1 Computation Order and Single Loop
The simplest code we can generate from a program written in our language consists of a global infinite loop computing all the variables of the program, in a suitable order. Generating pre_i from i: dec MK_PRE_IDENT : [Identifier -> Identifier];
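The "suitable order" can be computed by a topological sort of the instantaneous dependencies: an equation must come after the equations defining the variables it reads in the current cycle, while variables read only under pre(...) refer to the previous cycle and impose no ordering constraint. A Python sketch, taking the per-variable dependency sets as given (a hypothetical input format, computed elsewhere from the equations):

```python
def schedule(deps):
    """Order the defined variables so that each is computed after the
    variables it reads instantaneously. deps maps each defined variable
    to the set of variables its equation reads outside of pre(...);
    names not defined by any equation (inputs) are always available."""
    order, done = [], set()
    pending = dict(deps)
    while pending:
        ready = [v for v, ds in pending.items()
                 if all(d in done or d not in pending for d in ds)]
        if not ready:
            raise ValueError("instantaneous dependency cycle")
        for v in sorted(ready):        # sorted only to make output stable
            order.append(v)
            done.add(v)
            del pending[v]
    return order

# WD4: alarm reads is_set and remain instantaneously; is_set and remain
# read only inputs and pre(...) values, so they may come first.
order = schedule({"alarm": {"is_set", "remain"},
                  "is_set": {"set", "reset"},
                  "remain": {"set", "delay", "u_tps"}})
```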
12.4.2 Generation of Declarations

12.4.2.1 Generation of Input Declaration
Generating declarations for input variables. dec CGEN_INPUT : [Input_par -> Declaration]; def CGEN_INPUT = \(ip : Input_par) @(ip, Declaration, \(i : Identifier, t : Types) @(DECL_ELEM, i, t), \(i : Identifier, t : Types, w : Identifier) @(DECL_ELEM, i, t)); dec CGEN_INPUT_LIST : [Input_par_list -> Declaration]; def CGEN_INPUT_LIST = \(ipl : Input_par_list) @(GIF_THEN_ELSE, Declaration, @(IS_EMPTY, Input_par, ipl), NULL_DECL, let hipl = @(HEAD, Input_par, ipl), tipl = @(TAIL, Input_par, ipl) in let dcl1 = @(CGEN_INPUT, hipl), dcl2 = @(CGEN_INPUT_LIST, tipl) in @(DECL_SEQ, dcl1, dcl2));

12.4.2.2 Generation of Output Declaration
Generating declarations for output variables.

dec CGEN_OUTPUT : [Output -> Declaration];
def CGEN_OUTPUT =
  \(o : Output)
  @(o,
    Declaration,
    \(i : Identifier, t : Types) @(DECL_ELEM, i, t),
    \(i : Identifier, t : Types, w : Identifier) @(DECL_ELEM, i, t));

dec CGEN_OUTPUT_LIST : [Output_list -> Declaration];
def CGEN_OUTPUT_LIST =
  \(ol : Output_list)
  @(GIF_THEN_ELSE,
    Declaration,
    @(IS_EMPTY, Output, ol),
    NULL_DECL,
    let hol = @(HEAD, Output, ol),
        tol = @(TAIL, Output, ol) in
    let dcl1 = @(CGEN_OUTPUT, hol),
        dcl2 = @(CGEN_OUTPUT_LIST, tol) in
    @(DECL_SEQ, dcl1, dcl2));
12.4.2.3 Generation of Local Declaration
Generating declarations for local variables.

dec CGEN_LOCAL : [Local -> Declaration];
def CGEN_LOCAL =
  \(l : Local)
  @(l,
    Declaration,
    \(i : Identifier, t : Types) @(DECL_ELEM, i, t),
    \(i : Identifier, t : Types, w : Identifier) @(DECL_ELEM, i, t));

dec CGEN_LOCAL_LIST : [Local_list -> Declaration];
def CGEN_LOCAL_LIST =
  \(ll : Local_list)
  @(GIF_THEN_ELSE,
    Declaration,
    @(IS_EMPTY, Local, ll),
    NULL_DECL,
    let hll = @(HEAD, Local, ll),
        tll = @(TAIL, Local, ll) in
    let dcl1 = @(CGEN_LOCAL, hll),
        dcl2 = @(CGEN_LOCAL_LIST, tll) in
    @(DECL_SEQ, dcl1, dcl2));
12.4.2.4 Generation of Declaration for Auxiliary Variables
Generating declarations for auxiliary variables such as pre_i.
dec FIND_TYPE : [Identifier -> Local_list -> Type_Elem];
def FIND_TYPE =
  \(i : Identifier, ll : Local_list)
  @(GIF_THEN_ELSE,
    Type_Elem,
    @(IS_EMPTY, Local, ll),
    ERR_TYPE,
    let hll = @(HEAD, Local, ll),
        tll = @(TAIL, Local, ll) in
    @(hll,
      Type_Elem,
      \(v : Identifier, t : Types)
        @(GIF_THEN_ELSE, Type_Elem,
          @(EQ_IDENT, i, v), t, @(FIND_TYPE, i, tll)),
      \(v : Identifier, t : Types, w : Identifier)
        @(GIF_THEN_ELSE, Type_Elem,
          @(EQ_IDENT, i, v), t, @(FIND_TYPE, i, tll))));

dec CGEN_PRE_EXPR : [Expression -> Local_list -> Declaration];
dec CGEN_PRE_ELIST : [Expr_list -> Local_list -> Declaration];
def CGEN_PRE_ELIST =
  \(el : Expr_list, ll : Local_list)
  @(GIF_THEN_ELSE,
    Declaration,
    @(IS_EMPTY, Expression, el),
    NULL_DECL,
    let hel = @(HEAD, Expression, el),
        tel = @(TAIL, Expression, el) in
    let dcl1 = @(CGEN_PRE_EXPR, hel, ll),
        dcl2 = @(CGEN_PRE_ELIST, tel, ll) in
    @(DECL_SEQ, dcl1, dcl2));

dec CGEN_AUX_EXPR : [Expression -> Local_list -> Declaration];
dec CGEN_AUX_ELIST : [Expr_list -> Local_list -> Declaration];
def CGEN_AUX_ELIST =
  \(el : Expr_list, ll : Local_list)
  @(GIF_THEN_ELSE,
    Declaration,
    @(IS_EMPTY, Expression, el),
    NULL_DECL,
    let hel = @(HEAD, Expression, el),
        tel = @(TAIL, Expression, el) in
    let dcl1 = @(CGEN_AUX_EXPR, hel, ll),
        dcl2 = @(CGEN_AUX_ELIST, tel, ll) in
    @(DECL_SEQ, dcl1, dcl2));
Generation of declarations for variables such as pre_i, which are generated from the expressions pre(i) and current(i).

def CGEN_PRE_EXPR =
  \(e : Expression, r : Local_list)
  @(e,
    Declaration,
    \(b : Bool) NULL_DECL,
    \(i : Identifier)
      let t = @(FIND_TYPE, i, r) in
      let pi = @(MK_PRE_IDENT, i) in
      @(DECL_ELEM, pi, t),
    \(i : Int) NULL_DECL,
    NULL_DECL,
    \(el : Expr_list) @(CGEN_PRE_ELIST, el, r),
    \(x : Declaration, n : Nat)
      let pe = @(GET_POS_EXPR, e) in
      @(CGEN_PRE_EXPR, pe, r),
    \(f : Identifier, el : Expr_list) @(CGEN_PRE_ELIST, el, r),
    \(o : Binop, x1 : Declaration, x2 : Declaration)
      let le = @(GET_EXPR_BIN_LEXP, e),
          re = @(GET_EXPR_BIN_REXP, e) in
      let dcl1 = @(CGEN_PRE_EXPR, le, r),
          dcl2 = @(CGEN_PRE_EXPR, re, r) in
      @(DECL_SEQ, dcl1, dcl2),
    \(o : Unop, x : Declaration)
      let ue = @(GET_EXPR_UNEXP, e) in
      @(CGEN_PRE_EXPR, ue, r),
    \(x : Declaration)
      let pe = @(GET_EXPR_PRE, e) in
      NULL_DECL,
    \(x1 : Declaration, x2 : Declaration)
      let ei = @(GET_INIT_INIT, e),
          ef = @(GET_INIT_FOLLOW, e) in
      let dcl1 = @(CGEN_PRE_EXPR, ei, r),
          dcl2 = @(CGEN_PRE_EXPR, ef, r) in
      @(DECL_SEQ, dcl1, dcl2),
    \(x : Declaration)
      let ce = @(GET_EXPR_CUR, e) in
      NULL_DECL,
    \(x1 : Declaration, x2 : Declaration)
      let we = @(GET_WHEN_EXPR, e),
          wc = @(GET_WHEN_COND, e) in
      let dcl1 = @(CGEN_PRE_EXPR, we, r),
          dcl2 = @(CGEN_PRE_EXPR, wc, r) in
      @(DECL_SEQ, dcl1, dcl2),
    \(x1 : Declaration, x2 : Declaration, x3 : Declaration)
      let ife = @(GET_IF_COND, e),
          the = @(GET_IF_THEN, e),
          ele = @(GET_IF_ELSE, e) in
      let dcl1 = @(CGEN_PRE_EXPR, ife, r),
          dcl2 = @(CGEN_PRE_EXPR, the, r),
          dcl3 = @(CGEN_PRE_EXPR, ele, r) in
      @(DECL_SEQ, dcl1, @(DECL_SEQ, dcl2, dcl3)));

def CGEN_AUX_EXPR =
  \(e : Expression, r : Local_list)
  @(e,
    Declaration,
    \(b : Bool) NULL_DECL,
    \(i : Identifier) NULL_DECL,
    \(i : Int) NULL_DECL,
    NULL_DECL,
    \(el : Expr_list) @(CGEN_AUX_ELIST, el, r),
    \(x : Declaration, n : Nat)
      let pe = @(GET_POS_EXPR, e) in
      @(CGEN_AUX_EXPR, pe, r),
    \(f : Identifier, el : Expr_list) @(CGEN_AUX_ELIST, el, r),
    \(o : Binop, x1 : Declaration, x2 : Declaration)
      let le = @(GET_EXPR_BIN_LEXP, e),
          re = @(GET_EXPR_BIN_REXP, e) in
      let dcl1 = @(CGEN_AUX_EXPR, le, r),
          dcl2 = @(CGEN_AUX_EXPR, re, r) in
      @(DECL_SEQ, dcl1, dcl2),
    \(o : Unop, x : Declaration)
      let ue = @(GET_EXPR_UNEXP, e) in
      @(CGEN_AUX_EXPR, ue, r),
    \(x : Declaration)
      let pe = @(GET_EXPR_PRE, e) in
      @(CGEN_PRE_EXPR, pe, r),
    \(x1 : Declaration, x2 : Declaration)
      let ei = @(GET_INIT_INIT, e),
          ef = @(GET_INIT_FOLLOW, e) in
      let dcl1 = @(CGEN_AUX_EXPR, ei, r),
          dcl2 = @(CGEN_AUX_EXPR, ef, r) in
      @(DECL_SEQ, dcl1, dcl2),
    \(x : Declaration)
      let ce = @(GET_EXPR_CUR, e) in
      @(CGEN_PRE_EXPR, ce, r),
    \(x1 : Declaration, x2 : Declaration)
      let we = @(GET_WHEN_EXPR, e),
          wc = @(GET_WHEN_COND, e) in
      let dcl1 = @(CGEN_AUX_EXPR, we, r),
          dcl2 = @(CGEN_AUX_EXPR, wc, r) in
      @(DECL_SEQ, dcl1, dcl2),
    \(x1 : Declaration, x2 : Declaration, x3 : Declaration)
      let ife = @(GET_IF_COND, e),
          the = @(GET_IF_THEN, e),
          ele = @(GET_IF_ELSE, e) in
      let dcl1 = @(CGEN_AUX_EXPR, ife, r),
          dcl2 = @(CGEN_AUX_EXPR, the, r),
          dcl3 = @(CGEN_AUX_EXPR, ele, r) in
      @(DECL_SEQ, dcl1, @(DECL_SEQ, dcl2, dcl3)));

dec CGEN_AUX : [Equation -> Local_list -> Declaration];
def CGEN_AUX =
  \(eq : Equation, r : Local_list)
  @(eq,
    Declaration,
    \(il : Ident_list, e : Expression) @(CGEN_AUX_EXPR, e, r));

dec CGEN_AUX_LIST : [Eq_list -> Local_list -> Declaration];
def CGEN_AUX_LIST =
  \(el : Eq_list, r : Local_list)
  @(GIF_THEN_ELSE,
    Declaration,
    @(IS_EMPTY, Equation, el),
    @(DECL_ELEM, INIT_IDENT, BOOL_TYPE),
    let hel = @(HEAD, Equation, el),
        tel = @(TAIL, Equation, el) in
    let dcl1 = @(CGEN_AUX, hel, r),
        dcl2 = @(CGEN_AUX_LIST, tel, r) in
    @(DECL_SEQ, dcl1, dcl2));

12.4.3 Generation of Programs
Translation from nodes in SCADE to programs in ToyC.

dec NodeCodeGen : [Node_Def -> Program];
def NodeCodeGen =
  \(nd : Node_Def)
  @(nd,
    Program,
    \(hn : Header_Node, ll : Local_list, eql : Eq_list)
      let nodename = @(GET_HEAD_NAME, hn),
          inparlst = @(GET_HEAD_INLS, hn),
          ouparlst = @(GET_HEAD_OULS, hn) in
      let rr = @(CONCAT, Local, ll,
                 @(CONCAT, Local, inparlst, ouparlst)) in
      let dcl1 = @(CGEN_INPUT_LIST, inparlst),
          dcl2 = @(CGEN_OUTPUT_LIST, ouparlst),
          dcl3 = @(CGEN_LOCAL_LIST, ll),
          dcl4 = @(CGEN_AUX_LIST, eql, rr) in
      let dcl = @(DECL_SEQ, dcl1,
                  @(DECL_SEQ, dcl2,
                    @(DECL_SEQ, dcl3, dcl4))) in
      let st = @(CGEN_EQ_LIST, eql, inparlst, ouparlst, dcl4) in
      let nst = @(MK_WHILE, @(BOOL_EXPR, TT), st) in
      @(PROG, nodename, dcl, nst));
12.5 Remarks
• The compiler has introduced auxiliary variables: the variable _init, which is assumed to be initialized to true and is used to implement the operator ->, and the memory variable _pre_remain. Note that the expression pre(is_set) did not result in the creation of a memory variable, since the compiler found a way to avoid it.
• Although it is easy to find an ordering of actions that meets the dependency relations between variables (the static checks described above ensure that such an order exists), the choice of a "good" order is quite difficult: in particular, the order in which conditional statements are opened and closed is critical with respect to code length.

• The execution speed of the code could be improved. Note, for instance, that at every cycle the program tests whether this is the first one or not, which is particularly awkward. A solution consists of using more complex control structures than the single-loop structure. This is discussed in the following chapter.
Chapter 13

The SCADE/LUSTRE Compiler - Automaton-Like Code

13.1 The Basic Approach
The search for more complex control structures is borrowed from the compiling technique of ESTEREL and is based on the following remarks:

• The classical concept of control in imperative programs is represented in LUSTRE by means of Boolean variables acting over conditional and clock-handling operators.

• If a condition or a clock depends on values of a Boolean variable calculated at previous cycles (by means of an expression like pre(B) or current(B)), the code of the actual cycle could be made simpler if that value could be assumed to be known. One could then distinguish the code to be executed according to that value.

The synthesis of the control structure consists of choosing a set of state variables of Boolean type, whose values are expected to influence the code of future cycles. This set of variables is called the state of the program, and it takes only a finite set of values. For each possible value of the state, one defines the sequential code which would be executed during a cycle if the state variables had that value just before the execution of the cycle. Hence, starting from a given state and executing the corresponding code results in calculating the next state, ready for the execution of the next cycle. Finally, a static reachability analysis can be performed so as to delete state values and transitions which cannot be reached from the initial state (as a matter of fact,
this reachability analysis is done while generating state values and transitions, so as to avoid generating useless items). The result is a finite state automaton whose transitions are labeled with the code of the corresponding reaction. This is based on the work of [16] and [15].

State variables can be chosen in several ways among the following:

• Boolean expressions resulting from pre and current operators.

• Auxiliary variables like init_C, associated with some clock C, whose value is true at the first clock cycle and false afterwards, and which allow the evaluation of the -> operator.

This control synthesis is illustrated on the watchdog example WD4. The chosen state variables are pre(is_set) and init. Then:

1. The first cycle yields pre(is_set) = NIL and init = true. Let S0 be this initial state. Since init = true in this state, the value of every -> operator is that of its first operand. Thus is_set = false and remain = 0. Elementary Boolean calculus yields alarm = false. Furthermore, since is_set evaluates to false, this will be the value of pre(is_set) in the next state. The next state, S1, is then pre(is_set) = false and init = false. The code of state S0 looks like:

S0 : remain := 0;
     alarm := false;
     pre_remain := remain;
     goto S1;

2. In state S1, since init is false and the value of pre(is_set) is false, is_set evaluates to true if and only if the input set is true. Let S2 be the state where pre(is_set) is true and init is false. The code for state S1 is:

S1 : if set then
       remain := delay;
       alarm := (remain = 0) and (pre_remain > 0);
       pre_remain := remain;
       goto S2;
     else
       remain := if u_tps and pre_remain > 0
                 then pre_remain - 1;
                 else pre_remain;
                 end if;
       alarm := false;
       pre_remain := remain;
       goto S1;
     end if;

3. The code of state S2 (pre(is_set) is true and init is false) is as follows:

S2 : if set then
       remain := delay;
       alarm := (remain = 0) and (pre_remain > 0);
       pre_remain := remain;
       goto S2;
     else
       if reset then
         remain := if u_tps and pre_remain > 0
                   then pre_remain - 1;
                   else pre_remain;
                   end if;
         alarm := false;
         pre_remain := remain;
         goto S1;
       else
         remain := if u_tps and pre_remain > 0
                   then pre_remain - 1;
                   else pre_remain;
                   end if;
         alarm := (remain = 0) and (pre_remain > 0);
         pre_remain := remain;
         goto S2;
       end if;
     end if;
All reachable states having been processed, this ends the code generation. The following properties can be observed:

• The obtained transition codes are much simpler than the single-loop code, particularly the codes for S0 and S1. This reduction may be even more impressive for larger programs.

• In contrast, the overall length of the code may become very large. That is why, in practice, an action code table is built which uniquely identifies
actions that may belong to several transitions, and transition codes refer to actions by means of their indexes in the table. Boolean expressions depending on non-Boolean variables, which are needed for evaluating state variables, are handled as inputs by means of tests on their values.

• This technique allows assertions to be fully taken into account. Assertions are manipulated in the same way as state variables, and any branch yielding a false assertion is deleted. A state whose total code has been deleted is then declared unreachable, and branches already computed which lead to that state are recursively deleted. It should be noticed that assertions may increase the number of state variables and reachable states, as well as increase code length by introducing extra tests.

• In contrast to ESTEREL automata, the obtained LUSTRE automata are often far from minimal.
13.2 Static State
13.2.1 Type Definition of DeltaState
The static state, named DeltaState, is represented as a list of pairs of identifier and value.

def Item = @(And, Identifier, Value);
def DeltaState = @(List, Item);

dec IDEN_EQ : [Identifier -> Identifier -> Bool];
13.2.2 Constructors of DeltaState
Since DeltaState is defined as a list of @(And, Identifier, Value), it has the two usual list constructors.

def EMPTY_DSTATE = @(NIL, Item);
def ENTER_DSTATE =
  \(r : DeltaState, i : Identifier, v : Value)
  @(CONS, Item, @(ANDS, Identifier, Value, i, v), r);
13.2.3 Other Utilities of DeltaState
Other utilities include the function UPDATE_DSTATE, which updates the identifier i in a DeltaState r with a new value v;
dec UPDATE_DSTATE : [DeltaState -> Identifier -> Value -> DeltaState];
def UPDATE_DSTATE =
  \(r : DeltaState, i : Identifier, v : Value)
  @(GIF_THEN_ELSE,
    DeltaState,
    @(IS_EMPTY, Item, r),
    @(NIL, Item),
    let hr = @(HEAD, Item, r),
        tr = @(TAIL, Item, r) in
    let id = @(PJ1, Identifier, Value, hr),
        iv = @(PJ2, Identifier, Value, hr) in
    @(GIF_THEN_ELSE,
      DeltaState,
      @(IDEN_EQ, i, id),
      let nhr = @(ANDS, Identifier, Value, i, v) in
      @(CONS, Item, nhr, tr),
      @(CONS, Item, hr, @(UPDATE_DSTATE, tr, i, v))));
the function COMP_DSTATE, which concatenates two DeltaStates r1 and r2;

def COMP_DSTATE =
  \(r1 : DeltaState, r2 : DeltaState)
  @(CONCAT, Item, r1, r2);
and finally the function FIND_DSTATE, which finds the value of a given identifier i in a DeltaState r.

dec FIND_DSTATE : [DeltaState -> Identifier -> Value];
def FIND_DSTATE =
  \(r : DeltaState, i : Identifier)
  @(GIF_THEN_ELSE,
    Value,
    @(IS_EMPTY, Item, r),
    ERR_VAL,
    let hr = @(HEAD, Item, r),
        tr = @(TAIL, Item, r) in
    let id = @(PJ1, Identifier, Value, hr),
        iv = @(PJ2, Identifier, Value, hr) in
    @(GIF_THEN_ELSE,
      Value,
      @(IDEN_EQ, i, id),
      iv,
      @(FIND_DSTATE, tr, i)));
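The UPDATE_DSTATE and FIND_DSTATE operations can be mimicked in a few lines of Python over a list of (identifier, value) pairs, with ERR_VAL modeled as None (a sketch, not the PowerEpsilon code):

```python
# Illustrative sketch of the DeltaState utilities over a list of
# (identifier, value) pairs; None stands in for ERR_VAL.
def update_dstate(r, i, v):
    # rebuild the list, replacing the binding of i (other pairs unchanged)
    return [(i, v) if ident == i else (ident, val) for ident, val in r]

def find_dstate(r, i):
    # first binding wins, as in the recursive PowerEpsilon search
    for ident, val in r:
        if ident == i:
            return val
    return None  # ERR_VAL

r = [("init", True), ("pre_is_set", None)]
r2 = update_dstate(r, "init", False)
```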
13.2.4 Predicate Functions of DeltaState
The predicate function IS_IN_DSTATE tests whether an identifier i is defined in a DeltaState r.

dec IS_IN_DSTATE : [DeltaState -> Identifier -> Bool];
def IS_IN_DSTATE =
  \(r : DeltaState, i : Identifier)
  @(GIF_THEN_ELSE,
    Bool,
    @(IS_EMPTY, Item, r),
    FF,
    let hr = @(HEAD, Item, r),
        tr = @(TAIL, Item, r) in
    let id = @(PJ1, Identifier, Value, hr),
        iv = @(PJ2, Identifier, Value, hr) in
    @(GIF_THEN_ELSE,
      Bool,
      @(IDEN_EQ, i, id),
      TT,
      @(IS_IN_DSTATE, tr, i)));
Two static states r1 and r2 match if and only if they are completely identical, both in their ordering and in their mapped values. This is defined by MATCH_DSTATE.

dec MATCH_DSTATE : [DeltaState -> DeltaState -> Bool];
def MATCH_DSTATE =
  \(r1 : DeltaState, r2 : DeltaState)
  @(GIF_THEN_ELSE,
    Bool,
    @(IS_EMPTY, Item, r1),
    @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, Item, r2), TT, FF),
    @(GIF_THEN_ELSE,
      Bool,
      @(IS_EMPTY, Item, r2),
      FF,
      let hr1 = @(HEAD, Item, r1),
          tr1 = @(TAIL, Item, r1),
          hr2 = @(HEAD, Item, r2),
          tr2 = @(TAIL, Item, r2) in
      let id1 = @(PJ1, Identifier, Value, hr1),
          iv1 = @(PJ2, Identifier, Value, hr1),
          id2 = @(PJ1, Identifier, Value, hr2),
          iv2 = @(PJ2, Identifier, Value, hr2) in
      @(GIF_THEN_ELSE,
        Bool,
        @(AND, @(IDEN_EQ, id1, id2), @(EQ_VAL, iv1, iv2)),
        @(MATCH_DSTATE, tr1, tr2),
        FF)));
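A Python sketch of the same matching rule (order-sensitive and value-sensitive; illustrative, not the PowerEpsilon code):

```python
# Two static states match only if they agree pointwise, in the same order,
# on both identifiers and values (cf. MATCH_DSTATE).
def match_dstate(r1, r2):
    if len(r1) != len(r2):
        return False
    return all(p == q for p, q in zip(r1, r2))

a = [("init", True), ("pre_is_set", None)]
b = [("pre_is_set", None), ("init", True)]  # same bindings, other order
```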
13.3 State and State Code in Automata
13.3.1 Definition of State in Automata
A state of the automaton is a pair of a DeltaState and a Statement.

def DState = @(And, DeltaState, Statement);
def DStateList = @(List, DState);
13.3.2 Generation of Automata from Static State
The function APPEND_ITEM extends an automaton dl with a new identifier id: every state is expanded into three states, binding id to ERR_VAL, TT and FF respectively.

dec APPEND_ITEM : [DStateList -> Identifier -> DStateList];
def APPEND_ITEM =
  \(dl : DStateList, id : Identifier)
  @(GIF_THEN_ELSE,
    DStateList,
    @(IS_EMPTY, DState, dl),
    let nds0 = @(CONS, Item,
                 @(ANDS, Identifier, Value, id, ERR_VAL), @(NIL, Item)),
        nds1 = @(CONS, Item,
                 @(ANDS, Identifier, Value, id, @(BVAL, TT)), @(NIL, Item)),
        nds2 = @(CONS, Item,
                 @(ANDS, Identifier, Value, id, @(BVAL, FF)), @(NIL, Item)) in
    let nhdl0 = @(ANDS, DeltaState, Statement, nds0, MK_NULL_STAT),
        nhdl1 = @(ANDS, DeltaState, Statement, nds1, MK_NULL_STAT),
        nhdl2 = @(ANDS, DeltaState, Statement, nds2, MK_NULL_STAT) in
    @(CONS, DState, nhdl0,
      @(CONS, DState, nhdl1,
        @(CONS, DState, nhdl2, @(NIL, DState)))),
    let hdl = @(HEAD, DState, dl),
        tdl = @(TAIL, DState, dl) in
    let ds = @(PJ1, DeltaState, Statement, hdl),
        dz = @(PJ2, DeltaState, Statement, hdl) in
    let ntdl = @(APPEND_ITEM, tdl, id) in
    let nds0 = @(CONS, Item,
                 @(ANDS, Identifier, Value, id, ERR_VAL), ds),
        nds1 = @(CONS, Item,
                 @(ANDS, Identifier, Value, id, @(BVAL, TT)), ds),
        nds2 = @(CONS, Item,
                 @(ANDS, Identifier, Value, id, @(BVAL, FF)), ds) in
    let nhdl0 = @(ANDS, DeltaState, Statement, nds0, dz),
        nhdl1 = @(ANDS, DeltaState, Statement, nds1, dz),
        nhdl2 = @(ANDS, DeltaState, Statement, nds2, dz) in
    @(CONS, DState, nhdl0,
      @(CONS, DState, nhdl1,
        @(CONS, DState, nhdl2, ntdl))));
The function GEN_DSTATE_LIST is used to generate an automaton with all possible state values, every statement being initialized to MK_NULL_STAT.

dec GEN_DSTATE_LIST : [DeltaState -> DStateList];
def GEN_DSTATE_LIST =
  \(z : DeltaState)
  @(GIF_THEN_ELSE,
    DStateList,
    @(IS_EMPTY, Item, z),
    @(NIL, DState),
    let hz = @(HEAD, Item, z),
        tz = @(TAIL, Item, z) in
    let id = @(PJ1, Identifier, Value, hz),
        iv = @(PJ2, Identifier, Value, hz) in
    let dslst = @(GEN_DSTATE_LIST, tz) in
    @(APPEND_ITEM, dslst, id));
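The effect of GEN_DSTATE_LIST can be sketched in Python: each state variable ranges over NIL, TT and FF, so n variables yield 3^n candidate states, each paired with a null statement (an illustrative sketch, not the PowerEpsilon code):

```python
from itertools import product

# Every state variable may be NIL (None), True or False; enumerate the
# Cartesian product and pair each assignment with a null statement.
def gen_dstate_list(idents):
    vals = [None, True, False]   # ERR_VAL, TT, FF
    return [(list(zip(idents, combo)), "MK_NULL_STAT")
            for combo in product(vals, repeat=len(idents))]

states = gen_dstate_list(["pre_is_set", "init"])
```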
13.3.3 Erasing Redundant States in Automata
Since the function GEN_DSTATE_LIST generates an automaton with all possible state values and every statement initialized to MK_NULL_STAT, we then need to erase all the redundant states of this automaton. As an example, let us consider again the following version of the watchdog program WD4:

node WD4(set, reset, u_tps : bool; delay : int)
return (alarm : bool);
var is_set : bool; remain : int;
let
  alarm = is_set and (remain = 0) and pre(remain) > 0;
  is_set = false -> if set then true
                    else if reset then false
                    else pre(is_set);
  remain = 0 -> if set then delay
                else if u_tps and pre(remain) > 0 then pre(remain)-1
                else pre(remain);
  assert = not(set and reset);
tel.

The single-loop body, which is executed at each program reaction, looks like:

if _init then            % first cycle %
  is_set := false;
  remain := 0;
  alarm := false;
  _init := false
else                     % other cycles %
  if set then
    is_set := true;
    remain := delay
  else
    if reset then
      is_set := false
    endif;
    if u_tps and (_pre_remain > 0) then
      remain := _pre_remain - 1
    endif;
  endif;
  alarm := is_set and (remain = 0) and (_pre_remain > 0);
endif;
write(alarm);
_pre_remain := remain;
Variable
pre_is_set   NIL  NIL  NIL   T    T    T    F    F    F
init         NIL   T    F   NIL   T    F   NIL   T    F

Table 13.1: Variable Assignment 1

The variable init will:

1. Have no NIL value (its initial value will always be defined).

2. Always start with T, and be F ever after.

By rule 1:

Variable
pre_is_set   NIL  NIL   T    T    F    F
init          T    F    T    F    T    F

Table 13.2: Variable Assignment 2

By rule 2:

Variable
pre_is_set   NIL  NIL   T    F
init          T    F    F    F

Table 13.3: Variable Assignment 3

The variable pre_is_set will:

1. Always start with NIL, followed only by non-NIL values.

By rule 1:

Variable
pre_is_set   NIL   T    F
init          T    F    F

Table 13.4: Variable Assignment 4

With NIL (pre has no initial value: pre(X)), the total number of states will be 3^n. In [15], another version of LUSTRE was presented in which the pre operator has two operands: a variable and an initial value. In this case, we may assume that all the variables in the language are initialized, which means that all the variables in DeltaState will be defined without NIL (pre has an initial value: pre(X, false) or pre(X, true)). The total number of states will then be 2^n.

13.3.3.1 Erasing Assignment of init with NIL
The following functions are used to erase the assignments of the variable init with NIL.

dec IS_INIT_IDENT : [Identifier -> Bool];

dec INIT_NIL_ITMLIST : [DeltaState -> Bool];
def INIT_NIL_ITMLIST =
  \(itml : DeltaState)
  @(GIF_THEN_ELSE,
    Bool,
    @(IS_EMPTY, Item, itml),
    FF,
    let hitml = @(HEAD, Item, itml),
        titml = @(TAIL, Item, itml) in
    let i = @(PJ1, Identifier, Value, hitml),
        v = @(PJ2, Identifier, Value, hitml) in
    @(GIF_THEN_ELSE,
      Bool,
      @(AND, @(IS_INIT_IDENT, i), @(IS_ERR_VAL, v)),
      TT,
      @(INIT_NIL_ITMLIST, titml)));

dec IS_INIT_NIL : [DState -> Bool];
def IS_INIT_NIL =
  \(z : DState)
  let ds = @(PJ1, DeltaState, Statement, z),
      dz = @(PJ2, DeltaState, Statement, z) in
  @(INIT_NIL_ITMLIST, ds);

dec INIT_NIL_FILTER : [DStateList -> DStateList];
def INIT_NIL_FILTER =
  \(dsl : DStateList)
  @(GIF_THEN_ELSE,
    DStateList,
    @(IS_EMPTY, DState, dsl),
    @(NIL, DState),
    let hdsl = @(HEAD, DState, dsl),
        tdsl = @(TAIL, DState, dsl) in
    let ntdsl = @(INIT_NIL_FILTER, tdsl) in
    @(GIF_THEN_ELSE,
      DStateList,
      @(IS_INIT_NIL, hdsl),
      ntdsl,
      @(CONS, DState, hdsl, ntdsl)));

dec INIT_IDENT : Identifier;

dec IS_INIT_TT : [DState -> Bool];
def IS_INIT_TT =
  \(z : DState)
  let ds = @(PJ1, DeltaState, Statement, z),
      dz = @(PJ2, DeltaState, Statement, z) in
  let v = @(FIND_DSTATE, ds, INIT_IDENT) in
  @(GIF_THEN_ELSE,
    Bool,
    @(IS_BVAL, v),
    @(GET_BVAL, v),
    FF);
13.3.3.2 Erasing Assignment of init with F after T
dec INIT_TT_FILTOUT : [DStateList -> DStateList];
def INIT_TT_FILTOUT =
  \(dsl : DStateList)
  @(GIF_THEN_ELSE,
    DStateList,
    @(IS_EMPTY, DState, dsl),
    @(NIL, DState),
    let hdsl = @(HEAD, DState, dsl),
        tdsl = @(TAIL, DState, dsl) in
    let ntdsl = @(INIT_TT_FILTOUT, tdsl) in
    @(GIF_THEN_ELSE,
      DStateList,
      @(IS_INIT_TT, hdsl),
      ntdsl,
      @(CONS, DState, hdsl, ntdsl)));

dec INIT_TT_FILTER : [DStateList -> DStateList];
def INIT_TT_FILTER =
  \(dsl : DStateList)
  @(GIF_THEN_ELSE,
    DStateList,
    @(IS_EMPTY, DState, dsl),
    @(NIL, DState),
    let hdsl = @(HEAD, DState, dsl),
        tdsl = @(TAIL, DState, dsl) in
    @(GIF_THEN_ELSE,
      DStateList,
      @(IS_INIT_TT, hdsl),
      @(INIT_TT_FILTOUT, tdsl),
      @(CONS, DState, hdsl, @(INIT_TT_FILTER, tdsl))));

dec INIT_FILTER : [DStateList -> DStateList];
def INIT_FILTER =
  \(dsl : DStateList)
  let dsl1 = @(INIT_NIL_FILTER, dsl),
      dsl2 = @(INIT_TT_FILTER, dsl1) in
  dsl2;
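One plausible reading of these two filters, sketched in Python (illustrative, not a transcription of the PowerEpsilon functions): rule 1 removes states where init is NIL, and rule 2 keeps at most one state with init = true, since init is true only in the first cycle:

```python
# Sketch of the init filters over states represented as
# (list-of-(identifier, value) pairs, statement) tuples.
def find(r, i):
    return dict(r).get(i)

def init_filter(states):
    # rule 1: init never takes the NIL (None) value
    kept = [s for s in states if find(s[0], "init") is not None]
    # rule 2: keep a single init = True state (the initial one)
    first_true = next((s for s in kept if find(s[0], "init")), None)
    rest = [s for s in kept if find(s[0], "init") is False]
    return ([first_true] if first_true else []) + rest

all9 = [([("pre_is_set", p), ("init", q)], "MK_NULL_STAT")
        for p in (None, True, False) for q in (None, True, False)]
kept = init_filter(all9)
```

Applied to the 9 candidate states of the watchdog example, this leaves the 4 states of Table 13.3.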
13.3.3.3 Erasing Assignment of pre with non-NIL Value after NIL
dec IS_PRE_IDENT : [Identifier -> Bool];

dec PRE_IDENT_NIL : [DState -> Identifier -> Bool];
def PRE_IDENT_NIL =
  \(z : DState, i : Identifier)
  let ds = @(PJ1, DeltaState, Statement, z),
      dz = @(PJ2, DeltaState, Statement, z) in
  let v = @(FIND_DSTATE, ds, i) in
  @(IS_ERR_VAL, v);

dec PRE_IDEN_FILTOUT : [DStateList -> Identifier -> DStateList];
dec PRE_IDEN_FILTER : [DStateList -> Identifier -> DStateList];
def PRE_IDEN_FILTER =
  \(dsl : DStateList, i : Identifier)
  @(GIF_THEN_ELSE,
    DStateList,
    @(IS_EMPTY, DState, dsl),
    @(NIL, DState),
    let hdsl = @(HEAD, DState, dsl),
        tdsl = @(TAIL, DState, dsl) in
    @(GIF_THEN_ELSE,
      DStateList,
      @(PRE_IDENT_NIL, hdsl, i),
      @(PRE_IDEN_FILTOUT, tdsl, i),
      @(CONS, DState, hdsl, @(PRE_IDEN_FILTER, tdsl, i))));

dec PRE_FILTER : [DStateList -> DeltaState -> DStateList];
def PRE_FILTER =
  \(dsl : DStateList, z : DeltaState)
  @(GIF_THEN_ELSE,
    DStateList,
    @(IS_EMPTY, Item, z),
    @(NIL, DState),
    let hz = @(HEAD, Item, z),
        tz = @(TAIL, Item, z) in
    let i = @(PJ1, Identifier, Value, hz),
        v = @(PJ2, Identifier, Value, hz) in
    @(GIF_THEN_ELSE,
      DStateList,
      @(IS_PRE_IDENT, i),
      @(PRE_IDEN_FILTER, dsl, i),
      @(PRE_FILTER, dsl, tz)));
13.4 Construction of Automata
13.4.1 Value for Partial Evaluation
The result of partial evaluation is either a value or an expression.
13.4.1.1 Type Definition of PEVal
def PEVal = @(Or, Value, TExpression);
13.4.1.2 Constructors of PEVal
def MK_PEVAL = @(INJ1, Value, TExpression);
def MK_PEEXPR = @(INJ2, Value, TExpression);

dec ERR_PEVAL : PEVal;
13.4.1.3 Predicate Functions of PEVal
dec EQ_PEVAL : [PEVal -> PEVal -> Bool];
dec IS_PEVAL : [PEVal -> Bool];
13.4.1.4 Projectors of PEVal
dec GET_PEVAL : [PEVal -> Value];
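The PEVal interface can be mimicked with a small tagged union in Python (a sketch, under the assumption that a pair (tag, payload) is an adequate stand-in for @(Or, Value, TExpression)):

```python
# Illustrative tagged-union stand-in for PEVal: a known value is tagged
# "val", a residual expression is tagged "expr".
def mk_peval(v):   return ("val", v)
def mk_peexpr(e):  return ("expr", e)
def is_peval(p):   return p[0] == "val"
def get_peval(p):  return p[1]
```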
13.4.2 Partial Evaluation

13.4.2.1 Partial Evaluation of Operations
dec PEOPERATOR : [Binop -> PEVal -> PEVal -> PEVal];

dec PEUNOPERATOR : [Unop -> PEVal -> PEVal];
def PEUNOPERATOR =
  \(o : Unop, v : PEVal)
  @(o,
    PEVal,
    @(WHEN, Value, TExpression, PEVal, v,
      \(x : Value)
        @(GIF_THEN_ELSE,
          PEVal,
          @(IS_BVAL, x),
          let bv = @(GET_BVAL, x) in
          @(MK_PEVAL, @(BVAL, @(NOT, bv))),
          @(MK_PEVAL, ERR_VAL)),
      \(y : TExpression)
        @(MK_PEEXPR, @(UN_EXPR, o, y))),
    @(WHEN, Value, TExpression, PEVal, v,
      \(x : Value)
        @(GIF_THEN_ELSE,
          PEVal,
          @(IS_IVAL, x),
          let iv = @(GET_IVAL, x) in
          @(MK_PEVAL, @(IVAL, @(INEG, iv))),
          @(MK_PEVAL, ERR_VAL)),
      \(y : TExpression)
        @(MK_PEEXPR, @(UN_EXPR, o, y))));
13.4.2.2 Partial Evaluation for TExpression
Partial evaluation of expressions is recursively defined on TExpression.

dec PEEXPR : [TExpression -> DeltaState -> PEVal];
def PEEXPR =
  \(e : TExpression, r : DeltaState)
  @(e,
    PEVal,
    @(MK_PEEXPR, e),
    \(b : Bool) @(MK_PEVAL, @(BVAL, b)),
    \(i : Int) @(MK_PEEXPR, e),
    \(i : Identifier)
      let v = @(FIND_DSTATE, r, i) in
      @(GIF_THEN_ELSE,
        PEVal,
        @(IS_ERR_VAL, v),
        @(MK_PEEXPR, e),
        @(MK_PEVAL, v)),
    \(o : Binop, x1 : PEVal, x2 : PEVal)
      let e1 = @(GET_EXPR_LEFT, e),
          e2 = @(GET_EXPR_RIGHT, e) in
      let pv1 = @(PEEXPR, e1, r),
          pv2 = @(PEEXPR, e2, r) in
      @(PEOPERATOR, o, pv1, pv2),
    \(o : Unop, x : PEVal)
      let ne = @(GET_EXPR_UN, e) in
      let pv = @(PEEXPR, ne, r) in
      @(PEUNOPERATOR, o, pv),
    \(l : TExpr_list) @(MK_PEEXPR, e),
    \(x : PEVal, n : Nat) @(MK_PEEXPR, e),
    \(x1 : PEVal, x2 : PEVal)
      let e1 = @(GET_EXPR_IFT, e),
          e2 = @(GET_EXPR_THT, e) in
      let pv = @(PEEXPR, e1, r),
          nb = @(PEEXPR, e2, r) in
      @(GIF_THEN_ELSE,
        PEVal,
        @(IS_PEVAL, pv),
        let v = @(GET_PEVAL, pv) in
        @(GIF_THEN_ELSE,
          PEVal,
          @(IS_BVAL, v),
          let bv = @(GET_BVAL, v) in
          @(GIF_THEN_ELSE, PEVal, bv, nb, @(MK_PEEXPR, e)),
          @(MK_PEEXPR, e)),
        @(MK_PEEXPR, e)),
    \(x1 : PEVal, x2 : PEVal, x3 : PEVal)
      let e1 = @(GET_EXPR_IF, e),
          e2 = @(GET_EXPR_TH, e),
          e3 = @(GET_EXPR_EL, e) in
      let pv = @(PEEXPR, e1, r),
          nb1 = @(PEEXPR, e2, r),
          nb2 = @(PEEXPR, e3, r) in
      @(GIF_THEN_ELSE,
        PEVal,
        @(IS_PEVAL, pv),
        let v = @(GET_PEVAL, pv) in
        @(GIF_THEN_ELSE,
          PEVal,
          @(IS_BVAL, v),
          let bv = @(GET_BVAL, v) in
          @(GIF_THEN_ELSE,
            PEVal, bv, nb1, nb2),
          @(MK_PEEXPR, e)),
        @(MK_PEEXPR, e)));
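The essence of PEEXPR, constant folding against the static state with residual expressions left intact, can be sketched on a tiny tuple-encoded expression language (Python, illustrative; the expression tags are hypothetical):

```python
# Sketch: expressions are tuples ("const", v), ("var", i), ("not", e),
# ("if", c, t, e).  Known variables are looked up in the static state r
# (a dict); anything unknown is returned as a residual ("expr", ...).
def peexpr(e, r):
    tag = e[0]
    if tag == "const":
        return ("val", e[1])
    if tag == "var":
        v = r.get(e[1])
        return ("val", v) if v is not None else ("expr", e)
    if tag == "not":
        p = peexpr(e[1], r)
        return ("val", not p[1]) if p[0] == "val" else ("expr", e)
    if tag == "if":
        c = peexpr(e[1], r)
        if c[0] == "val":                     # condition is known:
            return peexpr(e[2] if c[1] else e[3], r)   # pick a branch
        return ("expr", e)                    # otherwise keep the residual
    return ("expr", e)

res = peexpr(("if", ("var", "init"), ("const", 0), ("var", "x")),
             {"init": True})
```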
13.4.2.3 Partial Evaluation for Statement
Partial evaluation of statements is recursively defined on Statement.

dec PESTAT : [Statement -> DeltaState -> Statement];
def PESTAT =
  \(s : Statement, r : DeltaState)
  @(s,
    Statement,
    %-- Null Statement --%
    s,
    %-- Assignment Statement --%
    \(i : Identifier, e : TExpression) s,
    %-- If-Then Statement --%
    \(e : TExpression, x : Statement)
      let b = @(GET_IFTHEN_STAT, s) in
      let nb = @(PESTAT, b, r) in
      let pv = @(PEEXPR, e, r) in
      @(GIF_THEN_ELSE,
        Statement,
        @(IS_PEVAL, pv),
        let v = @(GET_PEVAL, pv) in
        @(GIF_THEN_ELSE,
          Statement,
          @(IS_BVAL, v),
          let bv = @(GET_BVAL, v) in
          @(GIF_THEN_ELSE, Statement, bv, nb, @(MK_IFTHEN, e, nb)),
          @(MK_IFTHEN, e, nb)),
        @(MK_IFTHEN, e, nb)),
    %-- If-Then-Else Statement --%
    \(e : TExpression, x1 : Statement, x2 : Statement)
      let b1 = @(GET_IFTHEL_THEN, s),
          b2 = @(GET_IFTHEL_ELSE, s) in
      let nb1 = @(PESTAT, b1, r),
          nb2 = @(PESTAT, b2, r) in
      let pv = @(PEEXPR, e, r) in
      @(GIF_THEN_ELSE,
        Statement,
        @(IS_PEVAL, pv),
        let v = @(GET_PEVAL, pv) in
        @(GIF_THEN_ELSE,
          Statement,
          @(IS_BVAL, v),
          let bv = @(GET_BVAL, v) in
          @(GIF_THEN_ELSE, Statement, bv, nb1, nb2),
          @(MK_IFTHEL, e, nb1, nb2)),
        @(MK_IFTHEL, e, nb1, nb2)),
    %-- While-Loop Statement --%
    \(e : TExpression, x : Statement)
      let b = @(GET_WHILE_STAT, s) in
      let nb = @(PESTAT, b, r) in
      @(MK_WHILE, e, nb),
    %-- Sequence Statement --%
    \(x1 : Statement, x2 : Statement)
      let b1 = @(GET_SEQ_HEAD, s),
          b2 = @(GET_SEQ_TAIL, s) in
      let nb1 = @(PESTAT, b1, r),
          nb2 = @(PESTAT, b2, r) in
      @(MK_SEQ, nb1, nb2),
    %-- Output Statement --%
    \(e : TExpression) s,
    %-- Input Statement --%
    \(i : Identifier) s,
    %-- Label Statement --%
    \(l : Identifier, x : Statement)
      let b = @(GET_LABEL_STAT, s) in
      let nb = @(PESTAT, b, r) in
      @(MK_LABEL, l, nb),
    %-- Goto Statement --%
    \(l : Identifier) s);
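Likewise, the key case of PESTAT, collapsing a conditional whose condition partially evaluates to a known Boolean, can be sketched as follows (Python, illustrative; the statement encoding is hypothetical):

```python
# Sketch: statements are tuples ("if", cond_var, then_s, else_s),
# ("seq", s1, s2), or opaque leaves such as ("assign", ...).  When the
# condition variable is known in the static state r, the conditional
# collapses to the corresponding branch; otherwise the specialized
# conditional is kept.
def pestat(s, r):
    tag = s[0]
    if tag == "if":
        _, cond, then_s, else_s = s
        known = r.get(cond)
        if known is True:
            return pestat(then_s, r)
        if known is False:
            return pestat(else_s, r)
        return ("if", cond, pestat(then_s, r), pestat(else_s, r))
    if tag == "seq":
        return ("seq", pestat(s[1], r), pestat(s[2], r))
    return s  # assignments, goto, input/output are left unchanged

body = ("if", "_init", ("assign", "remain", 0),
                       ("assign", "remain", "prev"))
```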
13.4.3 Generation of Automata from Initial State and Code
dec IS_IF_INSIDE : [Statement -> Bool];
def IS_IF_INSIDE =
  \(s : Statement)
  @(s,
    Bool,
    %-- Null Statement --%
    FF,
    %-- Assignment Statement --%
    \(i : Identifier, e : TExpression) FF,
    %-- If-Then Statement --%
    \(e : TExpression, x : Bool) TT,
    %-- If-Then-Else Statement --%
    \(e : TExpression, x1 : Bool, x2 : Bool) TT,
    %-- While-Loop Statement --%
    \(e : TExpression, x : Bool) FF,
    %-- Sequence Statement --%
    \(x1 : Bool, x2 : Bool) FF,
    %-- Output Statement --%
    \(e : TExpression) FF,
    %-- Input Statement --%
    \(i : Identifier) FF,
    %-- Label Statement --%
    \(l : Identifier, x : Bool) FF,
    %-- Goto Statement --%
    \(l : Identifier) FF);

dec NUMB2IDENT : [Nat -> Identifier];
dec MATCH_DSTATE : [DeltaState -> DeltaState -> Bool];

dec DCAL_BLCK_LABEL : [Nat -> DStateList -> DeltaState -> Nat];
def DCAL_BLCK_LABEL =
  \(n : Nat, dsl : DStateList, ds : DeltaState)
  @(GIF_THEN_ELSE,
    Nat,
    @(IS_EMPTY, DState, dsl),
    ERR_NAT,
    let hdsl = @(HEAD, DState, dsl),
        tdsl = @(TAIL, DState, dsl) in
    let r = @(PJ1, DeltaState, Statement, hdsl),
        z = @(PJ2, DeltaState, Statement, hdsl) in
    @(GIF_THEN_ELSE,
      Nat,
      @(MATCH_DSTATE, r, ds),
      n,
      @(DCAL_BLCK_LABEL, @(SS, n), tdsl, ds)));

dec CAL_BLOCK_LABEL : [DStateList -> DeltaState -> Identifier];
def CAL_BLOCK_LABEL =
  \(dsl : DStateList, ds : DeltaState)
  let n = @(DCAL_BLCK_LABEL, OO, dsl, ds) in
  @(NUMB2IDENT, n);

dec AUTOPESTAT : [Statement -> DeltaState -> DStateList ->
                  @(And, DeltaState, Statement)];
def AUTOPESTAT =
  \(s : Statement, r : DeltaState, x : DStateList)
  @(s,
    @(And, DeltaState, Statement),
    %-- Null Statement --%
    @(ANDS, DeltaState, Statement, r, s),
    %-- Assignment Statement --%
    \(i : Identifier, e : TExpression)
      @(GIF_THEN_ELSE,
        @(And, DeltaState, Statement),
        @(IS_IN_DSTATE, r, i),
        let ve = @(PEEXPR, e, r) in
        @(GIF_THEN_ELSE,
          @(And, DeltaState, Statement),
          @(IS_PEVAL, ve),
          let bv = @(GET_PEVAL, ve) in
          let nr = @(UPDATE_DSTATE, r, i, bv) in
          @(ANDS, DeltaState, Statement, nr, MK_NULL_STAT),
          @(ANDS, DeltaState, Statement, r, s)),
        @(ANDS, DeltaState, Statement, r, s)),
    %-- If-Then Statement --%
    \(e : TExpression, x0 : @(And, DeltaState, Statement))
      let b = @(GET_IFTHEN_STAT, s) in
      let nrb = @(AUTOPESTAT, b, r, x) in
      let nr = @(PJ1, DeltaState, Statement, nrb),
          nb = @(PJ2, DeltaState, Statement, nrb) in
      let nbs = @(GIF_THEN_ELSE,
                  Statement,
                  @(IS_IF_INSIDE, nb),
                  nb,
                  let label = @(CAL_BLOCK_LABEL, x, nr) in
                  @(MK_SEQ, nb, @(MK_GOTO, label))) in
      let pv = @(PEEXPR, e, r) in
      @(GIF_THEN_ELSE,
        @(And, DeltaState, Statement),
        @(IS_PEVAL, pv),
        let v = @(GET_PEVAL, pv) in
        @(GIF_THEN_ELSE,
          @(And, DeltaState, Statement),
          @(IS_BVAL, v),
          let bv = @(GET_BVAL, v) in
          @(GIF_THEN_ELSE,
            @(And, DeltaState, Statement),
            bv,
            @(ANDS, DeltaState, Statement, nr, nbs),
            @(ANDS, DeltaState, Statement, r, @(MK_IFTHEN, e, nbs))),
          @(ANDS, DeltaState, Statement, r, @(MK_IFTHEN, e, nbs))),
        @(ANDS, DeltaState, Statement, r, @(MK_IFTHEN, e, nb))),
    %-- If-Then-Else Statement --%
    \(e : TExpression,
      x1 : @(And, DeltaState, Statement),
      x2 : @(And, DeltaState, Statement))
      let b1 = @(GET_IFTHEL_THEN, s),
          b2 = @(GET_IFTHEL_ELSE, s) in
      let nrb1 = @(AUTOPESTAT, b1, r, x),
            nrb2 = @(AUTOPESTAT, b2, r, x) in
        let nr1 = @(PJ1, DeltaState, Statement, nrb1),
            nb1 = @(PJ2, DeltaState, Statement, nrb1),
            nr2 = @(PJ1, DeltaState, Statement, nrb2),
            nb2 = @(PJ2, DeltaState, Statement, nrb2) in
        let nbs1 = @(GIF_THEN_ELSE, Statement,
                     @(IS_IF_INSIDE, nb1),
                     nb1,
                     let label = @(CAL_BLOCK_LABEL, x, nr1) in
                     @(MK_SEQ, nb1, @(MK_GOTO, label))),
            nbs2 = @(GIF_THEN_ELSE, Statement,
                     @(IS_IF_INSIDE, nb2),
                     nb2,
                     let label = @(CAL_BLOCK_LABEL, x, nr2) in
                     @(MK_SEQ, nb2, @(MK_GOTO, label))) in
        let pv = @(PEEXPR, e, r) in
        @(GIF_THEN_ELSE, @(And, DeltaState, Statement),
          @(IS_PEVAL, pv),
          let v = @(GET_PEVAL, pv) in
          @(GIF_THEN_ELSE, @(And, DeltaState, Statement),
            @(IS_BVAL, v),
            let bv = @(GET_BVAL, v) in
            @(GIF_THEN_ELSE, @(And, DeltaState, Statement),
              bv,
              @(ANDS, DeltaState, Statement, nr1, nb1),
              @(ANDS, DeltaState, Statement, nr2, nb2)),
            @(ANDS, DeltaState, Statement, r, @(MK_IFTHEL, e, nb1, nb2))),
          @(ANDS, DeltaState, Statement, r, @(MK_IFTHEL, e, nb1, nb2))),
      %-- While-Loop Statement --%
      \(e : TExpression, x0 : @(And, DeltaState, Statement))
        let b = @(GET_WHILE_STAT, s) in
        let nb = @(PESTAT, b, r) in
        @(ANDS, DeltaState, Statement, r, @(MK_WHILE, e, nb)),
      %-- Sequence Statement --%
      \(x1 : @(And, DeltaState, Statement),
        x2 : @(And, DeltaState, Statement))
        let b1 = @(GET_SEQ_HEAD, s),
            b2 = @(GET_SEQ_TAIL, s) in
        let nrb1 = @(AUTOPESTAT, b1, r, x) in
        let nr1 = @(PJ1, DeltaState, Statement, nrb1),
            nb1 = @(PJ2, DeltaState, Statement, nrb1) in
        let nrb2 = @(AUTOPESTAT, b2, nr1, x) in
        let nr2 = @(PJ1, DeltaState, Statement, nrb2),
            nb2 = @(PJ2, DeltaState, Statement, nrb2) in
        let nr = @(COMP_DSTATE, nr1, nr2),
            nb = @(MK_SEQ, nb1, nb2) in
        @(ANDS, DeltaState, Statement, nr, nb),
      %-- Output Statement --%
      \(e : TExpression) @(ANDS, DeltaState, Statement, r, s),
      %-- Input Statement --%
      \(i : Identifier) @(ANDS, DeltaState, Statement, r, s),
      %-- Label Statement --%
      \(l : Identifier, x0 : @(And, DeltaState, Statement))
        let b = @(GET_LABEL_STAT, s) in
        let nrb = @(AUTOPESTAT, b, r, x) in
        let nr = @(PJ1, DeltaState, Statement, nrb),
            nb = @(PJ2, DeltaState, Statement, nrb) in
        @(ANDS, DeltaState, Statement, nr, @(MK_LABEL, l, nb)),
      %-- Goto Statement --%
      \(l : Identifier) @(ANDS, DeltaState, Statement, r, s));

dec AUTOPESTATLIST : [Statement -> DStateList -> DStateList -> DStateList];
def AUTOPESTATLIST =
  \(s : Statement, dsl : DStateList, gdsl : DStateList)
    @(GIF_THEN_ELSE, DStateList, @(IS_EMPTY, DState, dsl),
      @(NIL, DState),
      let hdsl = @(HEAD, DState, dsl),
          tdsl = @(TAIL, DState, dsl) in
      let r = @(PJ1, DeltaState, Statement, hdsl),
          b = @(PJ2, DeltaState, Statement, hdsl) in
      let nrs = @(AUTOPESTAT, s, r, gdsl) in
      let nr = @(PJ1, DeltaState, Statement, nrs),
          ns = @(PJ2, DeltaState, Statement, nrs) in
      let nhdsl = @(ANDS, DeltaState, Statement, r, ns),
          ntdsl = @(AUTOPESTATLIST, s, tdsl, gdsl) in
      @(CONS, DState, nhdsl, ntdsl));
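The assignment case of AUTOPESTAT is essentially constant propagation: when the assigned variable is tracked in the delta-state and the right-hand side partially evaluates to a value, the state is updated and the statement collapses to a null statement. A minimal Python sketch of that case (the tuple encoding and names here are illustrative, not the PowerEpsilon definitions):

```python
# A minimal sketch (illustrative names, not the PowerEpsilon definitions) of
# AUTOPESTAT's assignment case: partially evaluate the right-hand side under a
# delta-state that maps some variables to known constants.
# Expressions are nested tuples such as ("+", ("var", "x"), ("const", 1)).

def pe_expr(expr, state):
    """Return a constant if expr fully reduces under `state`, else None."""
    tag = expr[0]
    if tag == "const":
        return expr[1]
    if tag == "var":
        return state.get(expr[1])        # None when the variable is unknown
    if tag == "+":
        a, b = pe_expr(expr[1], state), pe_expr(expr[2], state)
        return a + b if a is not None and b is not None else None
    return None

def pe_assign(var, expr, state):
    """If `var` is tracked and `expr` reduces, update the state and emit a
    null statement; otherwise leave both the state and the assignment alone."""
    if var in state:
        val = pe_expr(expr, state)
        if val is not None:
            return dict(state, **{var: val}), ("null",)
    return state, ("assign", var, expr)

# x := x + 1 under {x = 2}: the state becomes {x = 3}, the statement vanishes.
new_state, residual = pe_assign("x", ("+", ("var", "x"), ("const", 1)), {"x": 2})
```

When the right-hand side mentions an untracked variable, the assignment is kept as residual code, exactly as the second branch of the PowerEpsilon definition keeps `s` unchanged.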
13.4.4 Node Reachability Checking
Since we cannot be sure that every state in the generated automata is reachable, we need to check whether each state is reachable; if it is not, we simply delete it.

dec REACH_LABEL : [Statement -> Identifier -> Bool];
def REACH_LABEL =
  \(s : Statement, n : Identifier)
    @(s, Bool,
      %-- Null Statement --%
      FF,
      %-- Assignment Statement --%
      \(i : Identifier, e : TExpression) FF,
      %-- If-Then Statement --%
      \(e : TExpression, x : Bool)
        let b = @(GET_IFTHEN_STAT, s) in
        @(REACH_LABEL, b, n),
      %-- If-Then-Else Statement --%
      \(e : TExpression, x1 : Bool, x2 : Bool)
        let b1 = @(GET_IFTHEL_THEN, s),
            b2 = @(GET_IFTHEL_ELSE, s) in
        @(GIF_THEN_ELSE, Bool, @(REACH_LABEL, b1, n),
          TT,
          @(REACH_LABEL, b2, n)),
      %-- While-Loop Statement --%
      \(e : TExpression, x : Bool)
        let b = @(GET_WHILE_STAT, s) in
        @(REACH_LABEL, b, n),
      %-- Sequence Statement --%
      \(x1 : Bool, x2 : Bool)
        let b1 = @(GET_SEQ_HEAD, s),
            b2 = @(GET_SEQ_TAIL, s) in
        @(GIF_THEN_ELSE, Bool, @(REACH_LABEL, b1, n),
          TT,
          @(REACH_LABEL, b2, n)),
      %-- Output Statement --%
      \(e : TExpression) FF,
      %-- Input Statement --%
      \(i : Identifier) FF,
      %-- Label Statement --%
      \(l : Identifier, x : Bool) @(IDEN_EQ, l, n),
      %-- Goto Statement --%
      \(l : Identifier)
        let b = @(GET_LABEL_STAT, s) in
        @(REACH_LABEL, b, n));

dec REACHABLE_NODE : [DStateList -> Nat -> Bool];
def REACHABLE_NODE =
  \(dsl : DStateList, n : Nat)
    @(GIF_THEN_ELSE, Bool, @(IS_EMPTY, DState, dsl),
      FF,
      let hdsl = @(HEAD, DState, dsl),
          tdsl = @(TAIL, DState, dsl) in
      let r = @(PJ1, DeltaState, Statement, hdsl),
          s = @(PJ2, DeltaState, Statement, hdsl) in
      @(GIF_THEN_ELSE, Bool, @(REACH_LABEL, s, @(NUMB2IDENT, n)),
        TT,
        @(REACHABLE_NODE, tdsl, n)));

dec DO_REACH_CHECK : [Nat -> DStateList -> DStateList -> DStateList];
def DO_REACH_CHECK =
  \(n : Nat, dsl : DStateList, gdsl : DStateList)
    @(GIF_THEN_ELSE, DStateList, @(IS_EMPTY, DState, dsl),
      @(NIL, DState),
      let hdsl = @(HEAD, DState, dsl),
          tdsl = @(TAIL, DState, dsl) in
      let ntdsl = @(DO_REACH_CHECK, @(SS, n), tdsl, gdsl) in
      @(GIF_THEN_ELSE, DStateList, @(REACHABLE_NODE, gdsl, n),
        @(CONS, DState, hdsl, ntdsl),
        ntdsl));

def REACHABLE_CHECK =
  \(dsl : DStateList)
    @(DO_REACH_CHECK, OO, dsl, dsl)
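The check above can be pictured with ordinary lists and tuples. The sketch below (illustrative names, not the PowerEpsilon code) keeps a state only if some state's statement can transfer control to its label, e.g. via a goto:

```python
# An illustrative sketch (not the PowerEpsilon code) of REACHABLE_CHECK:
# a state of the generated automaton is kept only if some state's statement
# can reach its label.

def labels_reached(stmt):
    """Collect the labels a statement can transfer control to."""
    if stmt[0] == "goto":
        return {stmt[1]}
    out = set()
    for sub in stmt[1:]:                 # recurse into nested statements
        if isinstance(sub, tuple):
            out |= labels_reached(sub)
    return out

def reachable_check(states):
    """Prune the states whose label no statement ever jumps to."""
    reached = set()
    for _label, stmt in states:
        reached |= labels_reached(stmt)
    return [(lab, st) for lab, st in states if lab in reached]

states = [
    ("L0", ("seq", ("assign", "x", 1), ("goto", "L1"))),
    ("L1", ("goto", "L0")),
    ("L2", ("assign", "y", 2)),          # nothing jumps to L2: pruned
]
pruned = reachable_check(states)
```

As in DO_REACH_CHECK, the whole state list is scanned once per state, so the sketch has the same quadratic flavor as the PowerEpsilon definition.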
13.5 Generation of Automata-Like Code
13.5.1 Creation of DeltaState
The function TPROG_ENV is used to create a type environment from a given ToyC program.

def TPROG_ENV =
  \(p : Program)
    @(p, TEnv,
      \(i : Identifier, d : Declaration, s : Statement)
        @(TDECL, d, EMPTY_TENV));

dec TENV2DSTATE : [TEnv -> DeltaState];
13.5.2 From Automata to ToyC Statements
The following table shows the general structure of the automata: the first row contains the static states (DeltaState) and the second row the corresponding statements.

    δ0   δ1   ...   δn
    S0   S1   ...   Sn

    Table 13.5: Structure of Automata

The following function represents the translation process from automata to ToyC statements.

dec DO_DLST2ST : [Nat -> DStateList -> Statement];
def DO_DLST2ST =
  \(n : Nat, dsl : DStateList)
    @(GIF_THEN_ELSE, Statement, @(IS_EMPTY, DState, dsl),
      MK_NULL_STAT,
      let hdsl = @(HEAD, DState, dsl),
          tdsl = @(TAIL, DState, dsl) in
      let dst = @(PJ1, DeltaState, Statement, hdsl),
          sta = @(PJ2, DeltaState, Statement, hdsl) in
      let labid = @(NUMB2IDENT, n) in
      let lsta = @(MK_LABEL, labid, sta),
          osta = @(DO_DLST2ST, @(SS, n), tdsl) in
      @(MK_SEQ, lsta, osta));

dec DLST2STATEMENT : [DStateList -> Statement];
def DLST2STATEMENT =
  \(dsl : DStateList)
    @(DO_DLST2ST, OO, dsl);
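Assuming a list-of-pairs representation of DStateList, the translation above can be sketched in a few lines of Python; the function name and the tuple encoding of statements are hypothetical:

```python
# A small sketch (hypothetical tuple encoding) of DLST2STATEMENT: each entry
# of the state list becomes a labelled statement, and the labelled statements
# are chained into one big sequence, as DO_DLST2ST does with MK_LABEL and
# MK_SEQ.

def dlst2statement(dsl, n=0):
    """Fold [(delta_state, stmt), ...] into seq(label(Ln, stmt), rest)."""
    if not dsl:
        return ("null",)                      # MK_NULL_STAT for the empty list
    (_delta, stmt), rest = dsl[0], dsl[1:]
    labelled = ("label", "L%d" % n, stmt)     # MK_LABEL with NUMB2IDENT(n)
    return ("seq", labelled, dlst2statement(rest, n + 1))

prog = dlst2statement([({}, ("assign", "x", 1)), ({}, ("goto", "L0"))])
```

The labels "L0", "L1", ... play the role of NUMB2IDENT applied to the running counter that DO_DLST2ST threads through the recursion.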
13.5.3 Generation of ToyC Program
The following function is the one we finally want to obtain - the function which translates a SCADE program into a ToyC program.

dec AutomataCodeGen : [Node_Def -> Program];
def AutomataCodeGen =
  \(nd : Node_Def)
    let p = @(NodeCodeGen, nd) in
    @(p, Program,
      \(i : Identifier, d : Declaration, s : Statement)
        let r = @(TDECL, d, EMPTY_TENV) in
        let dstate = @(TENV2DSTATE, r) in
        let dsl = @(AUTO_GEN_DST_LST, dstate) in
        let z = @(GET_WHILE_STAT, s) in
        let ndsl1 = @(AUTOPESTATLIST, z, dsl, dsl),
            ndsl2 = @(REACHABLE_CHECK, ndsl1) in
        let nz1 = @(DLST2STATEMENT, ndsl2),
            nz2 = @(MK_WHILE, @(BOOL_EXPR, TT), nz1) in
        @(PROG, i, d, nz2));
Part V
Epilogue
Chapter 14
Related Works

14.1 The Synchronous Approach
Synchronous languages such as Signal, ESTEREL, and LUSTRE are built on a common mathematical framework that combines synchrony (i.e., time advances in lock step with one or more clocks) with deterministic concurrency. This section explores the reasons for choosing such an approach and its ramifications.
14.1.1 Fundamentals of Synchrony
The primary goal of a designer of safety-critical embedded systems is convincing himself or herself, the customer, and certification authorities that the design and its implementation are correct. At the same time, he or she must keep development and maintenance costs under control and meet nonfunctional constraints on the design of the system, such as cost, power, weight, or the system architecture itself (e.g., a physically distributed system comprising intelligent sensors and actuators, supervised by a central computer). Meeting these objectives demands design methods and tools that integrate seamlessly with existing design flows and are built on solid mathematical foundations.

The need for integration is obvious: confidence in a design is paramount, and anything outside the designer's experience will almost certainly reduce that confidence. The key advantage of using a solid mathematical foundation is the ability to reason formally about the operation of the system. This facilitates certification because it reduces ambiguity and makes it possible to construct proofs about the operation of the system. It also improves the implementation process because it enables the program manipulations needed to automatically construct different implementations, useful, for example, for meeting nonfunctional constraints.
In the 1980s, these observations led to the following decisions for the synchronous languages:

1. Concurrency - The languages must support functional concurrency, and they must rely on notations that express concurrency in a user-friendly manner. Therefore, depending on the targeted application area, the languages should offer as notations block diagrams (also called dataflow diagrams), hierarchical automata, or some imperative style of syntax familiar to the targeted engineering communities. Later, in the early nineties, the need appeared for mixing these different styles of notation. This obviously required that they all have the same mathematical semantics.

2. Simplicity - The languages must have the simplest formal model possible to make formal reasoning tractable. In particular, the semantics for the parallel composition of two processes must be the cleanest possible.

3. Synchrony - The languages must support the simple and frequently used implementation models, where all mentioned actions are assumed to take finite memory and time.
14.1.2 Synchrony and Concurrency
Combining synchrony and concurrency while maintaining a simple mathematical model is not so straightforward. Here, we discuss the approach taken by the synchronous languages. Synchrony divides time into discrete instants. This model is pervasive in mathematics and engineering. It appears in automata, in the discrete-time dynamical systems familiar to control engineers, and in the synchronous digital logic familiar to hardware designers. Hence it was natural to decide that a synchronous program would progress according to successive atomic reactions. We write this for convenience using the “pseudo-mathematical” statement P ≡ R^ω, where R denotes the set of all possible reactions and the superscript ω indicates nonterminating iteration.
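The statement P ≡ R^ω can be read as a nonterminating loop that fires one atomic reaction per instant. A small Python sketch of this reading, with a reaction modeled as a Mealy-machine step (all names here are illustrative):

```python
# The "pseudo-mathematical" statement P ≡ R^ω read as code: one atomic
# reaction fires per instant.  A reaction is modeled as a Mealy-machine step
# mapping (state, input) to (state, output); all names are illustrative.

def run(reaction, state, inputs):
    """Apply one reaction per instant.  Over an infinite input stream this
    loop never terminates, which is exactly the iteration R^ω."""
    outputs = []
    for i in inputs:
        state, out = reaction(state, i)
        outputs.append(out)
    return state, outputs

def count3(state, _inp):
    """Example reaction: emit "tick" on every third input event."""
    state = (state + 1) % 3
    return state, "tick" if state == 0 else "-"

final, outs = run(count3, 0, range(6))
```

Determinism is built in: given the same state and input stream, the loop produces exactly the same outputs, which is the property the synchronous model makes easy to reason about.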
14.2 Synchronous Programming Language ESTEREL
Most synchronous models and languages are imperative ones – e.g., SCCS, ESTEREL, SML, STATECHARTS – and therefore their programming style is very different from the dataflow style. Comparison experiments undertaken with ESTEREL showed that some problems fit better with the imperative style, while others did not. This seems to indicate that a good reactive programming tool should offer the possibility of mixing both approaches.
The pioneering work on ESTEREL led to the proposal of a general structure for the object code of synchronous programs: a finite automaton, each of whose transitions consists of executing a linear piece of code and corresponds to an elementary reaction of the program. Since the transition code has no loop, its execution time can be quite accurately evaluated on a given machine; this enables us to accurately bound the reaction time of the program, thus allowing the synchronous hypothesis to be checked.

Several synchronous languages are available at present, including ESTEREL, SIGNAL, STATECHARTS, SML and several hardware description languages. ESTEREL is an imperative and synchronous programming language. The goal of the ESTEREL project is to develop a real-time language based on a rigorous formal model, and to develop simultaneously the language, its semantics and its implementation. The language is rather unclassical since it is purely synchronous, deterministic, and based on a multiform notion of time, while parallel and “real-time” languages such as Ada, CSP, LTR, Occam, RTL/2 are asynchronous, nondeterministic, and consider only one notion of “absolute” time for their temporal primitives.

Concerning the notion of time, an essential point is that manipulating only a notion of “physical time” measured in seconds is not enough for most real-time applications. In fact many control algorithms use much more varied notions of time, if one considers a “time unit” as being just a repetitive event of some kind. For example, a mile runner knows four different natural time units: the second, of course, but also the step, the meter run and the lap elapsed. All these units are of the same temporal nature, and there is no reason not to write statements such as “delay 3 meter” or “at 3 laps do ”.
A typical ESTEREL program will look as follows:

during 4 LAP do
   every LAP do
      during 100 METER do RUN-SLOWLY end;
      during 100 STEPS or 57 SECOND do
         every STEP do [ JUMP_HIGH || BREATHE_OUT ] end
      end;
      WALK_SLOWLY
   end
end
The role of the “during” construct (which is actually not really primitive and will be derived from two “upto” and “uptonext” constructs) is to define the temporal extent of its body, measured against the appropriate time unit, the “every” construct being simply a loop over a “during” construct. As another example, here is a natural way of specifying an exact speed measure:

var SPEED : int in
   loop
      SPEED := 0;
      during 10 SECOND do
         every METER do SPEED := SPEED + 1 end
      end;
      emit SPEED_MEASURE(SPEED)
   end
end
With the strong synchronous hypothesis one can guarantee that an exact speed measure will be emitted after exactly 10 seconds.

Intuitively, an ESTEREL program consists of a collection of nested, concurrently-running threads described using a traditional imperative syntax whose execution is synchronized to a single, global clock. At the beginning of each reaction, each thread resumes its execution from where it paused (e.g., at a pause statement) in the last reaction, executes traditional imperative code (e.g., assigning the value of expressions to variables and making control decisions), and finally either terminates or pauses in preparation for the next reaction.

Threads communicate exclusively through signals: ESTEREL's main data type, which represents a globally-broadcast event. A coherence rule guarantees that an ESTEREL program behaves deterministically: all threads that check a signal in a particular reaction see the signal as either present (i.e., the event has occurred) or absent, but never both. Because it is intended to represent an event, the presence of a signal does not persist across reactions. Precisely, a signal is present in a reaction if and only if it is emitted by the program or is made present by the environment. Thus, signals behave more like wires in a synchronous digital circuit than variables in an imperative program.

Preemption statements, which allow a clean, hierarchical description of state-machine-like behavior, are one of ESTEREL's novel features. Unlike a traditional if-then-else statement, which tests its predicate once before the statement is executed, an ESTEREL preemption statement (e.g., abort) tests its predicate in each reaction in which its body runs. Various preemption statements provide a choice of whether the predicate is checked immediately, whether the body is allowed to run when the predicate is true, whether the predicate is checked before or after the body runs, and so forth.
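The coherence rule and the non-persistence of signals can be illustrated with a toy model of a single reaction: the set of present signals is computed as a fixpoint of the threads' emissions, and nothing carries over to the next reaction. This is only a sketch, not ESTEREL's actual constructive semantics:

```python
# A toy model (not ESTEREL's actual constructive semantics) of two properties
# described above: signals are globally broadcast within a reaction, and their
# presence does not persist into the next reaction.

def reaction(threads, env_signals):
    """Compute the set of present signals as a fixpoint of the threads'
    emissions.  Every thread sees the same final set: the coherence rule."""
    present = set(env_signals)
    changed = True
    while changed:
        changed = False
        for thread in threads:
            emitted = thread(present)
            if not emitted <= present:
                present |= emitted
                changed = True
    return present

# Thread A emits T whenever S is present; thread B emits S unconditionally.
threads = [lambda p: {"T"} if "S" in p else set(),
           lambda p: {"S"}]

r1 = reaction(threads, set())            # both S and T become present
r2 = reaction([lambda p: set()], set())  # a fresh reaction starts empty
```

Note that the fixpoint only ever adds signals; a statement whose emissions depend negatively on a signal (like `present S else emit S end` discussed below) has no such monotone reading, which is exactly where constructive causality analysis comes in.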
Command                        Meaning
emit S                         Make signal S present immediately
present S then p else q end    If signal S is present, perform p, otherwise q
pause                          Stop this thread of control until the next reaction
p; q                           Run p then q
loop p end                     Run p; restart it when it terminates
await S                        Pause until the next reaction in which S is present
p || q                         Start p and q together; terminate when both have terminated
abort p when S                 Run p up to, but not including, a reaction in which S is present
suspend p when S               Run p except when S is present
sustain S                      Means loop emit S; pause end
run M                          Expands to code for module M

Table 14.1: Some Basic ESTEREL Statements

To illustrate how ESTEREL can be used to describe control behavior, consider the program fragment describing the user interface of a portable compact disc player. It has input signals for play and stop, and a lock signal that causes these signals to be ignored until an unlock signal is received, to prevent the player from accidentally starting while stuffed in a bag. Note how the first process ignores the Play signal when it is already playing, and how the suspend statement is used to ignore Stop and Play signals.

loop
   suspend
      await Play;
      emit Change
   when Locked;
   abort
      run CodeForPlay
   when Change
end
||
loop
   suspend
      await Stop;
      emit Change
   when Locked;
   abort
      run CodeForStop
   when Change
end
||
every Lock do
   abort
      sustain Locked
   when Unlock
end

This example uses ESTEREL's instantaneous execution and communication semantics to ensure that the code for the play operation, for example, starts exactly when the Play signal arrives. Say the code for Stop is running. In that reaction, the await Play statement terminates and the Change signal is emitted. This prevents the code for Stop from running, and immediately starts the code for Play. Because the abort statement does not start checking its condition until the next reaction, the code for Play starts even though the Change signal is present.

As mentioned earlier, ESTEREL regards a reaction as a fixpoint equation, but only permits programs that behave functionally in every possible reaction. This is a subtle point: ESTEREL's semantics do allow programs such as cyclic arbiters where a static analysis would suggest a deadlock but dynamically the program can never reach a state where the cycle is active. Checking whether an ESTEREL program is deadlock-free is termed causality analysis, and involves ensuring that the causality constraints are never contradictory in any reachable state. These constraints arise from control dependencies (e.g., the “;” sequencing operator requires its second instruction to start after the first has terminated, and the present statement requires its predicate to be tested before either branch may run) and data dependencies (e.g., emit S must run before present S even if they are running in parallel). Thus, a statement like present S else emit S end has contradictory constraints and is considered illegal, but present S else present T then emit S end end may be legal if signal T is never present in a reaction in which this statement runs. Formally, ESTEREL's semantics are based on constructive causality. The understanding of this problem and its clean mathematical treatment is one of ESTEREL's most significant contributions to the semantic foundations of reactive synchronous systems.
Incomplete understanding of this problem is exactly what prevented the designers of the Verilog, VHDL, and Statecharts languages from having truly synchronous semantics. Instead, they have semantics based on awkward delta-cycle microsteps or delayed relation notions with an unclear semantic relation to the natural models of synchronous circuits and Mealy machines. This produces a gap between simulation behavior and hardware implementations that ESTEREL avoids.

From ESTEREL's beginnings, optimization, analysis, and verification were considered central to the compilation process. Again, this was only possible because of formal semantics. The automata-based V3 compiler came with model checkers (Auto/Graph and later Fc2Tools) based on explicit state space reduction and bisimulation minimization. The netlist-based V4 and V5 compilers used the logic optimization techniques in Berkeley's SIS environment to optimize their intermediate representation. In the V5 compiler, constructive causality analysis as well as symbolic binary decision diagram (BDD) based model checking (implemented in the Xeve verification tool) rely on an implicit state space representation implemented with the efficient TiGeR BDD library.
14.3 Dataflow Language - Lucid
Lucid [32] was born in 1974 at the University of Waterloo in Canada. The original goals in developing Lucid were very modest: to show that conventional, ‘mainstream’ programming could be done in a purely declarative language, one without assignment or goto statements. Lucid is a functional dataflow programming language; it represents an attempt to bring the success of conventional languages like C or Pascal (or the UNIX shell language itself, which also allows commands) to dataflow programming. There were already a number of dataflow languages before Lucid was invented, but most of them (in particular the “single-assignment” languages) were still heavily influenced by the von Neumann view of computation. There were also a number of pure nonprocedural languages (basically variations on LISP), but they are oriented towards the ‘recursive function’ style of programming rather than towards dataflow. Lucid is intended to be a clean, nonprocedural language which is nevertheless meant to be used in conjunction with a dataflow approach to programming. The goal of Lucid is to simplify the “system design” phase as much as possible; a Lucid program can basically be viewed as a textual form of a dataflow graph. Furthermore, Lucid is a first-order language; functions cannot be used as the arguments of other functions.
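Lucid's streams can be approximated with Python generators; the sketch below (illustrative only) models Lucid's fby ("followed by") operator and a running sum written in that stream style:

```python
# Lucid denotes programs over infinite streams.  A rough Python analogue
# (illustrative only) of Lucid's `fby` ("followed by") operator using
# generators, plus a running sum written in the same stream style.

from itertools import islice

def fby(x, y):
    """x fby y: the first value of stream x, followed by all of stream y."""
    yield next(x)
    yield from y

def nats():
    """The Lucid definition  n = 0 fby n + 1  as an explicit generator."""
    n = 0
    while True:
        yield n
        n += 1

def running_sum(xs):
    """s = x fby (s + next x): the stream of prefix sums of xs."""
    total = 0
    for v in xs:
        total += v
        yield total

first_five = list(islice(running_sum(nats()), 5))   # prefix sums of 0,1,2,3,4
```

The generators are demand-driven, which mirrors the pipeline reading of a Lucid program as a dataflow graph through which values flow on request.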
14.4 Signal
Another language quite similar to SCADE is Signal, and comparing the two is not an easy task. A main issue here is their distinct semantic models; Signal does not belong to the Kahn family of languages, which is based on functions over sequences and on functional composition, but rather rests on a concept of “programming by constraints”: each Signal construct denotes a finite-memory relation between “hiatonised” sequences, and a program is the intersection of such relations. A program has bounded memory but it can be relational (i.e., non-deterministic), and the object of the Signal clock calculus is to find an execution scheme such that the program is deterministic and deadlock-free. The free use of hiatons (i.e., “absent” data symbols) in the semantics makes Signal a more powerful language than SCADE in the sense that the internal clock of a program can be faster than its inputs. The drawback of the approach lies in the fact that the clock calculus is much more complex, and can hardly be performed mentally by a programmer.

Signal is designed for specifying systems. In contrast with programs, systems must be considered open: a system may be upgraded by adding, deleting or changing components. How can we avoid modifying the rest of the specification while doing this? It is tempting to say any synchronous program P does something at each reaction. In this case we say the activation clock of P coincides with its successive reactions. This is the viewpoint taken for the execution schemes: in an event-driven system, at least one input event is required to produce a reaction; in a sample-driven system, reactions are triggered by the clock ticks. However, if P is an open system, it may be embedded in some environment that gets activated when P does not. So we must distinguish the activation clock of P from the “ambient” sequence of reactions and take the viewpoint that each program possesses its own, local, activation clock. Parallel composition relates the activation clocks of the different components, and the “ambient” sequence of reactions is implicit (not visible to the designer). A language that supports this approach is called a multi-clock language. Signal is a multi-clock language.

Signal allows systems to be specified as block diagrams. Blocks represent components or subsystems and can be connected together hierarchically to form larger blocks. Blocks involve signals and relate them via operators. In Signal each signal has an associated clock, and the activation clock of a program is the supremum of the clocks of its involved signals. Signals are typed sequences (boolean, integer, real, ...) whose domain is augmented with the extra value ⊥ that denotes the absence of the signal in a particular reaction. The user is not allowed to manipulate the ⊥ symbol, to prevent him from manipulating the ambient sequence of reactions.

Signal has a small number of primitive constructs, listed in Table 14.2. In the table, Xτ denotes the status (absence, or actual carried value) of signal X in an arbitrary reaction τ. In the first two statements, the involved signals are either all present (they are ≠ ⊥) or all absent in the considered reaction. We say that all the signals involved have the same clock, and the corresponding statements are called single-clocked. In these first two statements, the integer index k ranges over the instants at which the signals are present, and “op” denotes a generic operation +, ×, ... extended point-wise to sequences. Note that the index k is implicit and does not appear in the syntax. The third and fourth statements are multi-clock ones. In the third statement, B is a boolean signal and true denotes the value “true.” With this set of primitives, Signal supports the two synchronous execution schemes. The default statement is an example of the first scheme; it corresponds to the ESTEREL statement
Operator             Meaning
Z := X op Y          Zτ ≠ ⊥ ⇔ Xτ ≠ ⊥ ⇔ Yτ ≠ ⊥;  ∀k. Zk = op(Xk, Yk)
Y := X$1             Yτ ≠ ⊥ ⇔ Xτ ≠ ⊥;  ∀k. Yk = Xk−1  (delay)
X := U when B        Xτ = Uτ when Bτ = true; otherwise Xτ = ⊥
X := U default V     Xτ = Uτ when Uτ ≠ ⊥; otherwise Xτ = Vτ
P | Q                Compose P and Q

Table 14.2: Signal Operators and Their Meaning
present U then emit X(?U) else present V then emit X(?V)

The generic op statement is an example of the second scheme; it coincides with the LUSTRE statement

Z = op(X, Y)

More generally, Signal allows the designer to mix reactive communication (offered by the environment) and proactive communication (demanded by the program). The first two statements impose constraints on the signals' clocks, and this mechanism can be used to impose arbitrary constraints. Say you wish to assert that the Boolean-valued expression C(X, Y, Z) always holds. Define B := C(X, Y, Z) and write B := B when B. This statement says B must be equal to B when B, which says B is either true or absent; B false is inconsistent. Therefore, the program

B := C(X, Y, Z) | B := B when B

states that either X, Y, Z are all absent, or condition C(X, Y, Z) is satisfied. Thus, invariant properties can be included in the specification and are guaranteed to hold by construction. This also allows incomplete or partial designs to be specified.
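The when and default primitives can be modeled over explicit "hiatonised" sequences, with None standing in for the absence symbol ⊥. This toy sketch ignores the clock calculus and simply works index-by-index on the ambient reactions:

```python
# A toy, index-by-index model (ignoring the clock calculus) of the Signal
# operators over "hiatonised" sequences: None plays the role of the absence
# symbol, and each list position is one reaction of the ambient clock.

def when(u, b):
    """X := U when B: U where B carries true, absent elsewhere."""
    return [uv if bv is True else None for uv, bv in zip(u, b)]

def default(u, v):
    """X := U default V: U where present, otherwise V."""
    return [uv if uv is not None else vv for uv, vv in zip(u, v)]

U = [1,    2,    None, 4    ]
B = [True, None, True, False]
V = [9,    9,    9,    9    ]

X = default(when(U, B), V)   # U at the reactions where B is true, else V
```

The B := B when B idiom is visible in this model too: wherever B carries False, when(B, B) yields absence, so the only consistent behaviors are "true" or "absent".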
14.5 Highlights of the Last Fifteen Years
The last twelve years have seen a number of successful industrial uses of the synchronous languages. Here, we describe some of these engagements.
14.5.1 Getting Tools to Market
The ESTEREL compilers from Berry's group at INRIA/CMA have long been distributed freely in binary form. The commercial version of ESTEREL was first marketed in 1998 by the French software company Simulog. In 1999, this division was spun off to form the independent company Esterel Technologies. Esterel Technologies recently acquired the commercial SCADE environment for LUSTRE programs to combine two complementary synchronous approaches.

In the early nineties, Signal was licensed to Techniques Nouvelles pour l'Informatique (TNI), a French software company located in Brest, in western France. From this license, TNI developed and marketed the Sildex tool in 1993. Several versions have been issued since then, most recently SildexV6. Sildex supports hierarchical dataflow diagrams and state diagrams within an integrated GUI, and allows Simulink and Stateflow discrete-time models from MATLAB to be imported. Globally Asynchronous Locally Synchronous (GALS) modelling is supported. Model checking is a built-in service. The RTBuilder add-on package is dedicated to real-time and timeliness assessments. TNI recently merged with Valiosys, a startup operating in the area of validation techniques, and Arexys, a startup developing tools for system-on-chip design. Together, they will offer tools for embedded system design.
14.5.2 The Cooperation with Airbus and Schneider Electric
LUSTRE's users pressed for its commercialization. In France in the 1980s, two big industrial projects involving safety-critical software were launched independently: the N4 series of nuclear power plants, and the Airbus A320 (the first commercial fly-by-wire aircraft). Consequently, two companies, Aerospatiale (now Airbus Industries) and Merlin-Gerin (now Schneider Electric), were faced with the challenge of designing highly safety-critical software, and unsuccessfully looked for suitable existing tools. Both decided to build their own tools: SAO at Aerospatiale, and SAGA at Schneider Electric. Both of these proprietary tools were based on synchronous dataflow formalisms. SAGA used LUSTRE because of an ongoing cooperation between Schneider Electric and the LUSTRE research group.

After some years of successfully using these tools, both companies were faced with the problem of maintaining and improving them, and admitted that this was not their job. Eventually, the software company Verilog undertook the development of a commercial version of SAGA that would also subsume the capabilities of SAO. This is the SCADE environment, and it currently offers an editor that manipulates both graphical and textual descriptions; two code generators, one of which is accepted by certification authorities for qualified software production; a simulator; and an interface to verification tools such as the Prover plug-in. The year 2001 brought further convergence: Esterel Technologies purchased the SCADE business unit.
SCADE has been used in many industrial projects, including the integrated nuclear protection system of the new French nuclear plants (Schneider Electric), part of the flight control software of the Airbus A340-600, and the re-engineered track control system of the Hong Kong subway (CS Transport).
14.5.3 The Cooperation with Dassault Aviation
Dassault Aviation was one of the earliest supporters of the Esterel project, and has long been one of its major users. It has conducted many large case studies, including the full specification of the control logic for an airplane landing gear system and a fuel management system that handles complex transfers between various internal and external tanks. These large-scale programs, provided in ESTEREL's early days, were instrumental in identifying new issues for further research, to which Dassault engineers often brought their own preliminary solutions. Here are three stories that illustrate the pioneering role of this company in ESTEREL's development.

Some of the Dassault examples did exhibit legitimate combinational loops when parts were assembled. Compiling the full program therefore required constructive causality analysis. This was strong motivation for tackling the causality issue seriously from the start; it was not a purely theoretical question that could easily be sidestepped.

Engineers found a strong need for modular or separate compilation of submodules, again for purely practical reasons. Indeed, some of the optimization techniques for reducing program size would not converge on global designs because they were simply too big. Some partial solutions have been found [36], but a unified treatment remains a research topic.

Object-oriented extensions were found to be desirable for “modelling in the large” and software reuse. A proposal for an ESTEREL/UML coupling was drafted by Dassault, which has since been adopted by and extended in the commercial ESTEREL tools.
14.5.4 The Cooperation with Snecma
During the 1980s, Signal was developed with the continuing support of CNET. This origin explains the name. The original aim of Signal was to serve as a development language for signal processing applications running on digital signal processors (DSPs). This led to its dataflow and graphical style and its handling of arrays and sliding windows, features common to signal processing algorithms. Signal became a joint trademark of CNET and INRIA in 1988. Signal was licensed to TNI in the early nineties, and the cooperation between Snecma and TNI started soon thereafter. Snecma was searching for modelling tools and methods for embedded systems that closely mixed dataflow and automaton styles with a formally sound basis. Aircraft engine control was the
Related Works
351
target application. As an early adopter of Sildex, Snecma largely contributed to the philosophy and style of the tool set and associated method.
14.5.5
The Cooperation with Texas Instruments
Engineers at Texas Instruments design large DSP chips targeted at wireless communication systems. The circuitry is typically synthesized from a specification written in VHDL, and the validation process consists of confronting the VHDL with a reference model written in C. Currently, validation is performed by simulating test sequences on both models and verifying that their behaviors are identical. The quality of the test suite, therefore, determines the quality of the validation. ESTEREL was introduced in the design flow as part of a collaboration with Texas Instruments' Wireless Terminals Business Center, located at Villeneuve-Loubet (France). Parts of the C specification were rewritten in ESTEREL, mostly at specific safety-critical locations. The language provided formal semantics and an FSM interpretation so that tests could be automatically synthesized with the goal of attaining full coverage of the control state space. An early result from these experiments showed that the usual test suites based on functional coverage only exercised about 30% of the full state space of these components. With the help of dedicated algorithms on BDD structures, full state coverage was achieved for the specifications considered, and further work was conducted on transition coverage, i.e., making sure the test suite exercised all program behaviors. Of course the coverage checks only apply to the components represented in ESTEREL, which are only a small part of a much larger C specification. The work conducted in this collaboration proved highly beneficial, since such simulation-based validation seems to be prevalent in industry. With luck, the methodology established here can easily be applied elsewhere. Automatic test suite generation with guaranteed state/transition coverage is a current marketing lead for ESTEREL.
14.6
Some New Technology
Modelling and code generation are often required in the design flow of embedded systems. In addition, verification and test generation are of increasing importance due to the skyrocketing complexity of embedded systems. In this section we describe synchronous-language-related efforts in these areas.
14.6.1
Handling Arrays
In a dataflow language, arrays are much more than a data structure; they are a very powerful way of structuring programs and defining parameterized regular
networks. For example, it is often useful to apply the same operator to all elements of an array (the "map" operation). Avoiding run-time array-index-out-of-bounds errors is another issue in the context of synchronous languages. Consequently, only restricted primitives should be provided for manipulating arrays. Arrays were first introduced in LUSTRE for describing circuits (arrays are virtually mandatory when manipulating bits). The LUSTRE-V4 compiler handles arrays by expansion: in both the circuit and the generated code, an array is expanded into one variable per element. This is suitable for hardware, but can be very inefficient in software. Because instantaneous dependencies among array elements are allowed, compiling this mechanism into code with arrays and loops is problematic. The LUSTRE-V4 experience showed that arrays are manipulated in a few common ways in most applications, suggesting the existence of a small set of "iterators." These iterators, which are well known in the functional language community, are convenient for most applications and can be easily compiled into small, efficient sequential code [40]. With these iterators, arrays can be compiled into arrays, operations on arrays can be compiled into loops, and many implicit intermediate arrays can be replaced with scalar variables ("accumulators") in the generated code. The SCADE tool implements these techniques, and similar technology is provided in the Sildex tool. One typical operator has the form A[B], where B is an array of integers and A is an array of any type. This is treated as a composition of maps, where B represents a (possibly multidimensional) iteration over the elements of A. Extensions of this basic mechanism are provided.
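As an illustration, the effect of such iterators can be sketched in Python; the iterator names map2, fold, and gather below are invented for illustration and are not Lustre syntax:

```python
# Illustrative sketch (not Lustre syntax): how dataflow array iterators
# can be compiled into plain loops with scalar accumulators.

def map2(op, a, b):
    # "map" iterator: apply op pointwise; compiles to a single loop,
    # producing one output array instead of one variable per element.
    return [op(x, y) for x, y in zip(a, b)]

def fold(op, init, a):
    # "fold" iterator: the implicit array of partial results is replaced
    # by a single scalar accumulator in the generated code.
    acc = init
    for x in a:
        acc = op(acc, x)
    return acc

def gather(a, b):
    # The A[B] operator: B is an array of indices into A,
    # i.e., a composition of maps over the elements of B.
    return [a[i] for i in b]

xs = [1, 2, 3, 4]
ys = [10, 20, 30, 40]
sums = map2(lambda x, y: x + y, xs, ys)      # [11, 22, 33, 44]
total = fold(lambda acc, x: acc + x, 0, xs)  # 10
picked = gather(ys, [3, 0])                  # [40, 10]
```

A compiler implementing these iterators emits exactly the loop bodies shown, avoiding the per-element expansion of the V4 scheme.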
14.6.2
New Techniques for Compiling ESTEREL
The first ESTEREL compilers were based on a literal interpretation of its semantics, written in Plotkin's structural operational style. The V1 and V2 compilers built automata for ESTEREL programs using Brzozowski's algorithm for taking derivatives of regular expressions. Later work by Gonthier and others produced the V3 compiler, which drastically accelerated the automata-building process by simulating the elegant intermediate code (IC) format: a concurrent control-flow graph hanging from a reconstruction tree that represents the hierarchical nesting of preemption statements. The automata techniques of the first three compilers work well for small programs and produce very fast code, but they do not scale to industrial-sized examples because of the state explosion problem. Some authors have tried to improve the quality of automata code by merging common elements to reduce its size or by performing other optimizations. Nevertheless, none of these techniques is able to compile concurrent programs longer than about 1000 lines. The successors to the automata compilers are based on translating ESTEREL into digital logic. This translation is natural because of ESTEREL's synchronous, finite-state semantics. In particular, unlike automata, it is nearly
one-to-one (each source statement becomes a few logic gates), so it scales very well to large programs. Although the executables generated by these compilers can be much smaller than those from the automata compilers, there is still much room for improvement. For example, there are techniques for reducing the number of latches in the generated circuit and for improving both the size and speed of the generated code. Executables generated by these compilers simply simulate the netlist after topologically sorting its gates. The V4 compiler, the earliest based on the digital logic approach, met with resistance from users because it considered incorrect many programs that compiled under the V3 compiler. The problem stemmed from the V4 compiler's insistence that the generated netlist be acyclic, effectively requiring control and data dependencies to be consistent in every possible state, even those the program cannot possibly enter. The V3 compiler, by contrast, considers control and data dependencies on a state-by-state basis, allowing them to vary, and only considers states that the program appears able to reach. The solution to the problem of the overly restrictive acyclicity constraint came from recasting ESTEREL's semantics in a constructive framework. Initial inspiration came from Malik's work on cyclic combinational circuits. Malik considers these circuits correct if three-valued simulation produces a two-valued result in every reachable state of the circuit. In ESTEREL, this translates into allowing a cyclic dependency when it is impossible for the program to enter a state where all the statements in the cycle can execute in a single reaction. These semantics turn out to be a strict superset of the V3 compiler's. Checking whether a program is constructive, i.e., that no static cycle is ever activated, is costly.
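To make the constructive criterion concrete, here is a minimal Python sketch of Malik-style three-valued simulation; the example circuit and net names are invented, and a real compiler works symbolically over all reachable states rather than one input valuation at a time:

```python
# Minimal sketch of Malik-style three-valued simulation of a cyclic
# combinational circuit. Net values are True, False, or None ("unknown").
# The example circuit and net names are invented for illustration.

def and3(a, b):
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None

def or3(a, b):
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None

def not3(a):
    return None if a is None else not a

def mux3(c, t, e):
    # three-valued multiplexer: "if c then t else e"
    return or3(and3(c, t), and3(not3(c), e))

def simulate(gates, inputs):
    """Iterate gate evaluation to a fixpoint, starting from all-unknown."""
    nets = dict(inputs)
    changed = True
    while changed:
        changed = False
        for out, fn, args in gates:
            v = fn(*[nets.get(a) for a in args])
            if nets.get(out) != v:
                nets[out] = v
                changed = True
    return nets

# Cyclic but constructive circuit (in the spirit of Malik's examples):
#   x = if c then a else y
#   y = if c then x else b
# x and y depend on each other statically, yet for every two-valued
# input the three-valued fixpoint is fully two-valued.
gates = [("x", mux3, ("c", "a", "y")),
         ("y", mux3, ("c", "x", "b"))]
res = simulate(gates, {"c": False, "a": True, "b": False})
# res["x"] and res["y"] both settle to False: no unknowns remain
```

The circuit would be rejected by a purely static acyclicity check, yet three-valued simulation resolves every net for every input, which is exactly the distinction the constructive semantics captures.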
It appears to require knowledge of the program's reachable states, and the implementation of the V5 compiler relies on symbolic state space traversal, as described by Shiple et al. Slow generated code is the main drawback of the logic network-based compilers, primarily because logic networks are a poor match to imperative languages such as C or processor assembly code. The program generated by these compilers wastes time evaluating idle portions of the program since it is resigned to evaluating each gate in the network in every clock cycle. This inefficiency can produce code a hundred times slower than that from an automata-based compiler. Two recently-developed techniques for compiling ESTEREL attempt to combine the capacity of the netlist-based compilers with the efficiency of code generated by the automata-based compilers. Both represent an ESTEREL program as a graph of basic blocks related by control and data dependencies. A scheduling step then orders these blocks and code is generated from the resulting scheduled graph. The compiler of Edwards uses a concurrent control-flow graph with explicit data dependencies as its intermediate representation. In addition to traditional assignment and decision nodes, the graph includes fork and join nodes that start and collect parallel threads of execution. Data dependencies are used to establish the relative execution order of statements in concurrently-running
threads. These are computed by adding an arc from each emission of a signal to each present statement that tests it. An executable C program is generated by stepping through the control-flow graph in scheduled order and tracking which threads are currently running. Fork and join nodes define thread boundaries in the intermediate representation. Normally, the code for each node is simply appended to the program being generated, but when the node is in a different thread, the compiler inserts code that simulates a context switch: a statement that writes a constant into a variable holding the program counter of the thread being suspended, followed by a multi-way branch that restores the program counter of the thread being resumed. This approach produces fast executables (consistently about half the speed of automata code and as many as a hundred times faster than the netlist-based compilers) for large programs, but is restricted to compiling the same statically acyclic class of programs as the V4 compiler. Because the approach relies heavily on the existence of a static schedule, it is not clear how it can be extended to compile the full class of constructive programs accepted by the V5 compiler. Another, related compilation technique due to Weil et al. compiles ESTEREL programs using a technique resembling that used by compiled-code discrete-event simulators. French et al. [54] describe the basic technique: the program is divided into short segments broken at communication boundaries and each segment becomes a separate C function invoked by a centralized scheduler. Weil et al.'s SAXO-RT compiler works on a similar principle. It represents an ESTEREL program as an event graph: each node is a small sequence of instructions, and the arcs between them indicate activation in the current reaction or the next.
The compiler schedules the nodes in this graph according to control and data dependencies and generates a scheduler that dispatches them in that order. Instead of a traditional event queue, it uses a pair of bit arrays, one representing the events to run in the current reaction, the other those that will run in the next. The scheduler steps through these arrays in order, executing each pending event, which may add events to either array. Because the SAXO-RT compiler uses a fixed schedule, like Edwards' approach and the V4 compiler, it is limited to compiling statically acyclic programs and therefore does not implement ESTEREL's full constructive semantics. However, a more flexible, dynamic scheduler might allow the SAXO-RT compiler to handle all constructively valid ESTEREL programs. An advantage of these two latter approaches, which only the SAXO-RT compiler has exploited, is the ability to impose additional constraints on the order in which statements execute in each cycle, allowing some control over the order in which inputs may arrive and outputs are produced within a cycle. While this view goes beyond the idealized, zero-delay model of the language, it is useful from a modeling or implementation standpoint where system timing does play a role.
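The two-bit-array execution scheme can be sketched in Python as follows; the node bodies and their activations are invented for illustration:

```python
# Toy sketch of a SAXO-RT-style execution scheme: nodes are small
# functions dispatched in a fixed static order; two boolean arrays record
# which nodes are pending in the current reaction and which in the next.
# All node behaviors here are invented for illustration.

N = 3  # number of nodes, listed in schedule order
log = []

def node0(cur, nxt):
    log.append("n0")
    cur[1] = True   # activate node1 later in the same reaction

def node1(cur, nxt):
    log.append("n1")
    nxt[2] = True   # activate node2 in the next reaction

def node2(cur, nxt):
    log.append("n2")

NODES = [node0, node1, node2]

def reaction(cur):
    nxt = [False] * N
    for i in range(N):          # walk the fixed static schedule
        if cur[i]:
            NODES[i](cur, nxt)  # a node may set bits in either array
    return nxt                  # becomes "current" for the next reaction

pending = [True, False, False]  # initially only node0 is active
pending = reaction(pending)     # runs n0 then n1; n2 is deferred
pending = reaction(pending)     # runs n2
# log == ["n0", "n1", "n2"]
```

The fixed iteration order over the array is what makes the scheme fast, and also what restricts it to statically acyclic programs: a node can only activate nodes later in the schedule within the same reaction.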
14.6.3
Observers for Verification and Testing
Synchronous observers are a way of specifying properties of programs or, more generally, of describing nondeterministic behaviors. A well-known technique for verifying programs consists of specifying the desired property by means of a language acceptor (generally a Büchi automaton) describing the unwanted traces of the program and showing that the synchronous product of this automaton with the program has no behaviors, meaning that no trace of the program is accepted by the automaton. We adopt a similar approach, restricting ourselves to safety properties for the following reasons. First, for the applications we address, almost all the critical properties are safety properties: nobody cares that a train eventually stops; it must stop before some time or distance to avoid an obstacle. Second, safety properties are easier to specify, since a simple finite automaton is sufficient (no need for the Büchi acceptance criterion). Finally, safety properties are generally easier to verify. In particular, they are preserved by abstraction: if an abstraction of a program is safe, it follows that the concrete program is, too. Verification by observers is natural for synchronous languages because the synchronous product used to compute trace intersection is precisely the parallel composition provided by the languages. This means safety properties can be specified with a special program called an observer that observes the variables or signals of interest and at each step decides whether the property is fulfilled up to this step, emitting a signal that indicates whether it is. A program satisfies the property if and only if the observer never complains during any execution. This technique for specifying properties has several advantages: 1. The property can be written in the same language as the program. This is a strong argument for convincing users to write formal specifications: they don't need to learn another formalism. 2.
The observer can be executed, so testing it is a way to gain confidence that it correctly expresses the user's intention. It can also be run during the actual execution of the program, to perform self-testing. Several verification tools use this technique: in general, such a tool takes a program, an observer of the desired property, and an observer of the assumptions on the environment under which the property is intended to hold. It then explores the set of reachable states of the synchronous product of these three components, checking that if the property observer complains in a reachable state, then another state was reached earlier in which the assumption observer complained. In other words, it is not possible to violate the property without first violating the assumption. Such an "assume-guarantee" approach also allows compositional verification. Adding observers has other applications, such as automatic testing. Here, the property observer is used as an oracle that decides whether a test passes.
Even more interesting is the use of an assumption observer, which may be used to automatically generate realistic test sequences that satisfy some assumptions. This is often critical for the programs we consider since they often control their environments, at least in part. Consequently one cannot generate interesting test sequences or check that the control is correct without assuming that the environment obeys the program’s commands.
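The observer scheme can be illustrated with a small Python sketch; the system, property, and assumption below are invented, and a real tool would explore all reachable states symbolically rather than simulating individual traces:

```python
# Sketch of verification by synchronous observers. The system under test,
# the safety property, and the environment assumption are all invented:
# a counter that must stay below 2, assuming the environment never sends
# "inc" twice in a row.

def system(state, inc):
    # the program under test: counts consecutive "inc" inputs
    return state + 1 if inc else 0

def property_observer(count):
    # safety property: "the counter stays below 2"; True means OK here
    return count < 2

def assumption_observer(prev_inc, inc):
    # environment assumption: no two consecutive "inc" inputs
    return not (prev_inc and inc)

def check(trace):
    """Run the synchronous product step by step. The property observer
    may only complain after the assumption observer has complained."""
    count, prev_inc, assumption_broken = 0, False, False
    for inc in trace:
        if not assumption_observer(prev_inc, inc):
            assumption_broken = True
        count = system(count, inc)
        if not property_observer(count) and not assumption_broken:
            return False  # genuine violation: property failed first
        prev_inc = inc
    return True

# Under the assumption, the counter never reaches 2:
ok1 = check([True, False, True, False])   # True
# Here the property fails, but only after the assumption was violated,
# so the assume-guarantee check still passes:
ok2 = check([True, True, True])           # True
```

The same observers can serve as a test oracle: running `check` alongside the deployed program flags any trace in which the property fails while the environment behaved as assumed.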
14.7
Major Lessons
Now that time has passed, research has produced results, and usage has provided feedback, some lessons can be drawn. We summarize them in this section.
14.7.1
The Strength of the Mathematical Model
The main lesson from the last decade has been that the fundamental model (time as a sequence of discrete instants and parallel composition as a conjunction of behaviors) has remained valid and was never questioned. We believe this will continue, but it is interesting to see how the model resisted gradual shifts in requirements. In the 1980s, the major requirement was to have a clean abstract notion of time in which "delay 1; delay 2" exactly equals "delay 3" (due to G. Berry), something not guaranteed by real-time languages (e.g., Ada) or operating systems at that time. Similarly, deterministic parallel composition was considered essential. The concept of reaction answered the first requirement, and parallel composition as a conjunction answered the second. These design choices allowed the development of a first generation of compilers and code generators. Conveniently, unlike the then-standard asynchronous interleaving approach to concurrency, this definition of parallel composition greatly reduced the state-explosion problem and made program verification feasible. Compiling functional concurrency into embedded code running under such simple execution schemes allowed critical applications to be deployed without the need for any operating system scheduler. This was particularly attractive to certification authorities, since it greatly reduced system complexity. For them, standard operating system facilities such as interrupts were already dangerously unpredictable. The desire for simplicity in safety-critical real-time systems appears to be universal, so the synchronous approach seems to be an excellent match for these systems. In the late 1980s, it appeared that both the imperative and dataflow styles were useful and that mixing them was desirable. The common, clean underlying mathematics allowed multiple synchronous formalisms to be mixed without compromising rigor or mathematical soundness.
For example, the imperative ESTEREL language was compiled into a logic-netlist-based intermediate representation that enabled existing logic optimization technology to be used to
optimize ESTEREL programs. In the early 1990s, research on program verification and synthesis led to the need for expressing specifications in the form of invariants. Unlike programs, invariants are generally not deterministic, but fortunately nondeterminism fits smoothly into the synchronous framework. Signal has considered nondeterministic systems from the very beginning, but has supplied nondeterminism in a disciplined form. Unlike other formalisms in which parallel composition is nondeterministic, nondeterminism in Signal is due exclusively to a fixpoint equation having multiple solutions, a property that can be formally checked whenever needed. Similarly, assertions were added to LUSTRE to impose constraints on the environment for use in proofs. Defining the behavior in a reaction as a fixpoint equation resulted in subtle causality problems, mainly for ESTEREL and Signal (LUSTRE solves this trivially by forbidding delay-free loops). For years this was seen as a drawback of synchronous languages. The hunt for causality analysis in ESTEREL was like a Mary Higgins Clark novel: each time the right solution was reportedly found, a flaw in it was reported soon thereafter. However, G. Berry's constructive semantics does appear to be the solution, and has stood unchallenged for more than five years. A similar struggle occurred for Signal behind the scenes, but in the end this mathematical difficulty turned out to be a definite advantage. In 1995, dependency was added as a first-class operator in both Signal and the DC and DC+ common formats for synchronous languages. Besides encoding causality constraints, this allowed scheduling constraints to be specified explicitly and has proven to be an interesting alternative to the use of the imperative ";". In the late 1990s, the focus extended beyond simple programming to the design of systems. This calls for models of components and interfaces with associated behavioral semantics.
While this is still research in progress, initial results indicate the synchronous model will continue to do the job. Everything has its limits, including the model of synchrony. But its clean mathematics has allowed the study of synchrony intertwined with other models, e.g., asynchrony.
14.7.2
Compilation Has Been Surprisingly Difficult but Was Worth the Effort
Of the three languages, ESTEREL has been the most challenging to compile because its semantics include both control and data dependencies. We have described the various compilation technologies, and observe that while many techniques are known, none is considered wholly satisfactory. The principle of Signal compilation has not changed since 1988: it consists of solving the abstraction of the program described by the clock and causality calculus. However it was not until 1994 that it was proven that the internal format of the compiler was a canonical form for the corresponding program,
therefore guaranteeing the uniqueness of this format and establishing the compilation strategy on a firm basis. Properly handling arrays within the compilation process has been a difficult challenge, and no definitive solution exists yet. Compilation usually requires substantially transforming the program, usually making it much harder to trace, especially in the presence of optimization. Unfortunately, designers of safety-critical systems often insist on traceability because of certification constraints. Some effort has been made to offer different trade-offs between efficiency and traceability. Separate compilation is one possible answer, and direct solutions are also available. This has been an important focus for the SCADE/LUSTRE compiler, which is DO-178B certified. Overall, the developed compilation techniques can be regarded as a deep and solid body of technologies.
14.8
Future Challenges
Rather than a complete methodology, synchronous programming is more a step in the overall system development process, which may include assembling components, defining architectures, and deploying the result. A frequently-leveled complaint about the synchronous model is that not all execution architectures comply with it, yet the basic paradigm of synchrony demands global specifications. Exploring the frontiers of synchrony and beyond is therefore the next challenge. In this section we discuss three topics that sit at this boundary: architecture modeling, deployment on asynchronous architectures, and building systems from components.
14.8.1
Architecture Modeling
As seen before, synchronous languages are excellent tools for the functional specification of embedded systems. As we have discussed, however, embedded applications are frequently deployed on architectures that do not comply with the execution model of synchrony. Typical instances are found in the process industries, automobiles, or aircraft, in which the control system is often distributed over a communication infrastructure consisting of buses or serial lines. Fig. 5 shows a situation of interest: a distributed architecture for an embedded system. It consists of three computers communicating via serial lines and a bus. The computer on the left processes data from some sensor, the computer on the right computes controls for some actuator, and the central computer supervises the system. There are typically two sources of deviation from the synchronous execution model in this architecture: 1. The bus and the serial lines may not comply with the synchronous model, unless they have been carefully designed with this objective in mind. The family of Time-Triggered Architectures (TTA) is such a synchrony
compliant architecture. But most are not, and are still used in embedded systems. 2. The A/D and D/A converters, and more generally the interfaces with the analog world of sensors and actuators, by definition lie beyond the scope of the synchronous model. While some architectures avoid difficulty 1, difficulty 2 cannot be avoided. Whence the following problem. Problem 14.1 (Architecture Modelling) How can the designer be assisted in understanding how her/his synchronous specification behaves when faced with difficulties 1 and/or 2? Assume that the considered synchronous specification decomposes as P = P1 || P2 || P3 and that the three components are mapped, from left to right, onto the three processors. Assume, for the moment, that each Pi can run "infinitely fast" on its processor. Then the difficulties are concentrated in the communications between these processors: they cannot be considered synchronous. Now, the possible behaviors of each communication element (sensor, actuator, serial line, bus) can be modelled by resorting to the continuous real time of analog systems. Discrete-time approximations of these models can then be considered. Take the example of a serial line consisting of a FIFO queue of length 1; its continuous-time model says: always: FIFO can be written if empty just before, and FIFO can be read if full just before, where "empty just before" means that the FIFO has been empty for some time interval until now. The corresponding discrete-time approximation reads exactly the same. But the above model is just an invariant synchronous system, performing a reaction at each discretization step; it does not, however, behave functionally. As explained, this invariant can be specified in Signal. It can also be specified using the mechanism of LUSTRE assertions, which are statements of the form assert B, where B is a boolean flow; the meaning is that B is asserted to be always true. This trick can then be applied to (approximately) model all communication elements.
The deployment of specification P on the architecture is then modelled as P’, where P’ = ((sensor || P1) || serial || P2 || serial || (P3 || actuator)) || bus
The resulting program can then be subjected to formal verification. By turning nondeterminism into additional variables, it can also be simulated.
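The one-place FIFO invariant above can be sketched in Python, checked step by step over a discrete-time trace; the encoding is illustrative, not Signal or LUSTRE syntax:

```python
# Sketch of the one-place FIFO modelled as a synchronous invariant:
# at each discretization step, a write is allowed only if the FIFO was
# empty just before, and a read only if it was full just before.

def fifo_step(full, write, read):
    """Return (ok, new_full); ok is False when the invariant is violated."""
    if write and full:
        return False, full      # cannot write a full FIFO
    if read and not full:
        return False, full      # cannot read an empty FIFO
    new_full = (full or write) and not read
    return True, new_full

def run(steps):
    # execute a trace of (write, read) commands, one pair per reaction
    full = False
    for write, read in steps:
        ok, full = fifo_step(full, write, read)
        if not ok:
            return False
    return True

run([(True, False), (False, True), (True, False)])  # True: write, read, write
run([(True, False), (True, False)])                 # False: two writes in a row
```

Composing such element models with P1, P2, and P3 in parallel yields the program P' above, whose traces over-approximate the behaviors of the deployed system.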
Now, what if the assumption that the three processors run infinitely fast cannot be accepted? All compilers of synchronous languages compute schedules that comply with the causality constraints imposed by the specification. In this way, each reaction decomposes into atomic actions that are partially ordered; call a thread any maximal totally ordered sequence of such atomic actions. Threads that are too long to be considered instantaneous with respect to the time discretization step are broken into successive micro-threads executed in successive discretization steps. The original synchronous semantics of each Pi is preserved if we make sure that micro-threads from different reactions do not overlap. We have just performed synchronous time refinement. The above technique has been experimented with in the SafeAir project using the Signal language, and in the Crisys project using the LUSTRE language. It is now available as a service for architecture modelling with Sildex-V6. So much for Problem 14.1. However, the alert reader will have noticed in passing that the breaking of threads into micro-threads is generally subtle, and therefore calls for assistance. To this end we consider the next problem. Problem 14.2 (Architecture Profiling) How can a scheduled architecture be profiled? Look closely at the discussions on causality and scheduling. Reactions decompose into atomic actions that are partially ordered by the causality analysis of the program. Additional scheduling constraints can be enforced by the designer, provided they do not contradict the causality constraints. This we call the scheduled program; note that it is only partially ordered, not totally. Scheduled programs can, for instance, be specified in Signal by using the statement "U -> X when B"; in doing so, the latter statement is used for encoding both the causality constraints and the additional scheduling constraints set by the designer.
Now, assume that (U, V) -> X holds in the current reaction of the considered scheduled program, i.e., (U, V) are the closest predecessors of X in the scheduled program. Denote by dX the earliest date of availability of X in the current reaction. We have

dX = max(dU, dV) + δop,    (14.1)

where δop is the additional duration of the operator (if any) needed to produce X from U and V. Basically, the trick consists in replacing the scheduled program X := U op V | (U, V) -> X by (14.1). If we regard (14.1) as a Signal program, the resulting mapping is a homomorphism of the set of Signal programs into itself. Performing this systematically on a scheduled program yields an associated profiling program, which models the timing behavior of the original scheduled program. This solves Problem 14.2. This technique has been implemented in Signal.
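The profiling transformation of equation (14.1) can be sketched in Python as a traversal of the scheduled program's dependency graph; the signal names and operator durations below are invented for illustration:

```python
# Sketch of the profiling homomorphism (14.1): given a scheduled program
# as an acyclic graph of operations with durations, compute the earliest
# availability date of every computed signal.
# Signal names and durations are invented for illustration.

def profile(deps, duration, input_dates):
    """deps maps each computed signal X to its closest predecessors;
    dX = max over predecessors of their dates, plus delta_op (eq. 14.1)."""
    dates = dict(input_dates)

    def date(x):
        if x not in dates:
            # recursively apply (14.1) along the dependency graph
            dates[x] = max(date(p) for p in deps[x]) + duration[x]
        return dates[x]

    for x in deps:
        date(x)
    return dates

# Scheduled program: X := U op V ; Y := X op W, unit operator durations
deps = {"X": ("U", "V"), "Y": ("X", "W")}
duration = {"X": 1, "Y": 1}
dates = profile(deps, duration, {"U": 0, "V": 2, "W": 0})
# dates["X"] == 3 (max(0, 2) + 1), dates["Y"] == 4 (max(3, 0) + 1)
```

Applying this mapping to every equation of the scheduled program produces the profiling program, which models the timing behavior of the original.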
A related technique has been developed at Verimag and FTR & D Grenoble in the framework of the TAXYS project, for profiling ESTEREL designs.
14.8.2
Beyond Synchrony
The execution schemes are relevant for embedded systems, but they do not encompass all needs. While the synchronous model is still pervasive in synchronous hardware, Globally Asynchronous Locally Synchronous (GALS) architectures are now considered for hardware built from components, or for hardware/software hybrid architectures such as those encountered in system-on-a-chip designs. Similarly, embedded control architectures, e.g., in automobiles or aircraft, are distributed. While some recent design choices favor so-called Time-Triggered architectures complying with our model of synchrony, many approaches still rely on a (partially) asynchronous communication medium. It turns out that the model (3) and (4) of synchrony was clean enough to allow a formal study of its relationships with some classes of asynchronous models. This allowed the development of methods for deploying synchronous designs on asynchronous architectures. It also allowed the study of how robust the deployment of a synchronous program can be on certain classes of asynchronous distributed real-time architectures. Synchronous programs can be deployed on GALS architectures satisfying the following assumption: Assumption 14.1 The architecture obeys the model of a network of synchronous modules interconnected by point-to-point wires, one per communicated signal; each individual wire is lossless and preserves the ordering of messages, but the different wires are not mutually synchronized. An important body of theory and techniques has been developed to support the deployment of synchronous programs onto such GALS architectures. We shall give a flavor of this by explaining how LUSTRE and Signal programs can be mapped to an asynchronous network of dataflow actors in the sense of Ptolemy, an instance of an architecture satisfying Assumption 14.1. In this model, each individual actor proceeds by successive salvos; in each salvo, input tokens are consumed and output tokens are produced.
Hence each individual actor can be seen as a synchronous machine. Tokens then travel along the network wires asynchronously, in a way compliant with Assumption 14.1.
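A toy Python sketch of such a network: two actors run in salvos, and the connecting wire is lossless and order-preserving but otherwise unsynchronized, in the spirit of Assumption 14.1 (the actor behaviors are invented):

```python
from collections import deque

# Toy sketch of a GALS dataflow network: two synchronous actors connected
# by a point-to-point wire that is lossless and order-preserving but
# delivers tokens asynchronously. Actor behaviors are invented.

class Wire:
    def __init__(self):
        self.q = deque()
    def put(self, tok):
        self.q.append(tok)      # lossless, FIFO order preserved
    def get(self):
        return self.q.popleft() if self.q else None

def producer_salvo(n, out):
    out.put(n * n)              # one output token per salvo

def consumer_salvo(inp, acc):
    tok = inp.get()             # consume an input token if one is there
    return acc + tok if tok is not None else acc

wire = Wire()
for n in range(1, 4):
    producer_salvo(n, wire)     # the producer runs three salvos first ...
acc = 0
for _ in range(3):
    acc = consumer_salvo(wire, acc)   # ... the consumer catches up later
# acc == 1 + 4 + 9 == 14: content and order are preserved despite the
# asynchronous interleaving of salvos
```

Because each wire preserves order and loses nothing, the functional result is the same regardless of how the two actors' salvos interleave, which is the property that makes such deployments of synchronous programs sound.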
14.8.3
Real-Time and Logical Time
There is a striking difference between the two preceding approaches which both describe some aspects of desynchronisation: in the GALS situation, desynchronised programs stay functionally equivalent to their original synchronised versions and thus any design and validation result that has been obtained in the synchronous world remains valid in the desynchronised one. This is not the case
in the approach described here: we have shown how to faithfully mimic the architecture within the synchronous framework, but it is quite clear that the program P' that mimics the implementation will in general not behave like the synchronous specification P. This is because the added architectural features do not behave like ideal synchronous communication mechanisms. But this is a common situation: in many real-time systems, real devices do not behave like ideal synchronous devices, and some care has to be taken when extrapolating design and validation results from the ideal synchronous world to the real real-time world. This kind of problem has been investigated within the Crisys Esprit project. Some identified conditions that justify such an extrapolation are as follows: 1. Continuity - The extrapolation of analog computation techniques played an important part in the origins of synchronous programming, and continuity is clearly a fundamental aspect of analog computing. It is therefore not surprising that it also plays a part in synchronous programming. 2. Bounded variability - Continuity is important because it implies some bandwidth limitation. This can be extended to noncontinuous cases, provided systems don't exhibit unbounded variability. 3. Race avoidance - Bounded variability is not enough when sequential behaviors are considered, however, because of intrinsic phenomena like "essential hazards." This is why good designers take great care to avoid critical races in their designs, so as to preserve validation results. This implies using, in some cases, asynchronous programming techniques, e.g., causality chains.
14.8.4
From Programs to Components and Systems
Building systems from components is accepted as the solution, today and tomorrow, for constructing and maintaining large and complex systems. This also holds for embedded systems, with the additional difficulty that components can be mixed hardware/software ones. Object-oriented technologies have had this as their focus for many years. Object-oriented design of systems has been supported by a large variety of notations and methods, and this profusion eventually converged to the Unified Modeling Language (UML) standard. Synchronous languages can contribute significantly to the design of embedded systems from components, by allowing the designer to master accurately the behavioral aspects of her/his application. To achieve this, synchronous languages must support the following:
1. Genericity and inheritance - While sophisticated typing mechanisms have been developed to this end in object-oriented languages, the behavioral
aspects are less well understood, however. Some behavioral genericity is offered in synchronous languages by providing adequate polymorphic primitive operators for expressing control. Behavioral inheritance in the framework of synchronous languages is far less understood and is still a topic of current research.
2. Interfaces and abstractions - We will show that the synchronous approach offers powerful mechanisms to handle the behavioral facet of interfaces and abstractions.
3. Implementations, separate compilation, and imports - Handling separate compilation and performing imports while finely tuning the behavioral aspects is a major contribution of synchronous languages, which we shall develop hereafter.
4. Multi-faceted notations, à la UML.
5. Dynamicity - Dynamic creation/deletion of instances is an important feature of large systems. Clearly, it is also a dangerous feature where critical systems are concerned.
In this section we concentrate on items 2, 3, and 4, and provide some hints for item 5.
14.8.4.1
What are the Proper Notions of Abstraction, Interface, and Implementation for Synchronous Components?
We discuss these matters in the context of Signal. Consider the following Signal component P:
Y := f(X) | V := g(U)
It consists of the parallel composition of the two single-clocked statements Y := f(X) and V := g(U). Note that no relation is specified between the clocks of the two inputs X and U, meaning that they can occur independently at any reaction of P. A tentative implementation of P could follow the second execution scheme; call it P':
for each clock tick do
  check presence of X; present X then (compute Y:=f(X); emit Y);
  check presence of U; present U then (compute V:=g(U); emit V);
end
We can equivalently represent the implementation P' by the following Signal program, which we again call P':
Y := f(X) | Y -> U | V := g(U)
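The reaction loop of P' above can be sketched as ordinary sequential code. The following Python fragment is a minimal illustration, not Signal itself: the functions f and g are hypothetical placeholders, and absence of a signal in a reaction is modeled by None.

```python
# Sketch of one reaction of P': at each tick, each input may be
# present (carrying a value) or absent (modeled here as None).

def f(x):          # placeholder for the function f in Y := f(X)
    return x + 1

def g(u):          # placeholder for the function g in V := g(U)
    return u * 2

def react(x, u):
    """One reaction of P': emit Y if X is present, V if U is present."""
    outputs = {}
    if x is not None:                 # check presence of X
        outputs["Y"] = f(x)           # compute Y := f(X); emit Y
    if u is not None:                 # check presence of U
        outputs["V"] = g(u)           # compute V := g(U); emit V
    return outputs

# X and U occur independently: any subset of inputs may be present.
print(react(3, None))   # only Y is emitted
print(react(3, 10))     # both Y and V are emitted
```

Because the two statements share no signals, each reaction of this sketch is insensitive to which subset of {X, U} happens to be present, matching the clock independence stated for P.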
The added scheduling constraint "Y -> U" in P' expresses that, when both Y and U occur in the same reaction, the emission of Y must occur prior to the reading of U. Now consider another component Q:
X := h(V)
It can be implemented as Q':
for each clock tick do
  check presence of V; present V then (compute X:=h(V); emit X);
end
Now, the composition R ≡ P || Q of these two components is the system
Y := f(X) | V := g(U) | X := h(V)
It can, for instance, be implemented by R':
for each clock tick do
  check presence of U; present U then (compute V:=g(U); emit V);
  check presence of V; present V then (compute X:=h(V); emit X);
  check presence of X; present X then (compute Y:=f(X); emit Y);
end
Unfortunately, the parallel composition of the two implementations P' and Q' is blocking: P' starves, waiting for X to come from component Q', and symmetrically for Q'. This problem is best revealed using the Signal writing of the parallel composition of P' and Q':
(Y := f(X) | Y -> U | V := g(U)) | X := h(V)
which exhibits the causality circuit
X -> Y | Y -> U | U -> V | V -> X
Therefore, P' cannot be considered a valid implementation of P for subsequent reuse as a component, since composing P' and Q' does not yield a valid implementation of R! This is unfortunate, since a standard compilation of P would typically yield P' as executable code. The following lessons about abstraction and implementation of synchronous components can be drawn from this example:
1. Brute-force separate compilation of components can result in being unable to reuse components for designing systems.
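The blocking can be checked mechanically: the causality circuit above is simply a cycle in the scheduling-dependency graph of the composed program. The following Python sketch (with illustrative names, not any Signal tooling) encodes those dependency edges and detects the cycle with a depth-first search.

```python
# Scheduling dependencies of (Y := f(X) | Y -> U | V := g(U)) | X := h(V).
# An edge A -> B means: A must be available before B within a reaction.
edges = {
    "X": ["Y"],   # Y := f(X)  : X is read before Y is emitted
    "Y": ["U"],   # Y -> U     : scheduling constraint added in P'
    "U": ["V"],   # V := g(U)  : U is read before V is emitted
    "V": ["X"],   # X := h(V)  : contributed by Q'
}

def has_cycle(graph):
    """Return True iff the directed graph contains a cycle (DFS)."""
    visiting, done = set(), set()
    def dfs(node):
        if node in visiting:
            return True                  # back edge: cycle found
        if node in done:
            return False
        visiting.add(node)
        for succ in graph.get(node, []):
            if dfs(succ):
                return True
        visiting.remove(node)
        done.add(node)
        return False
    return any(dfs(n) for n in graph)

print(has_cycle(edges))   # the cycle X -> Y -> U -> V -> X is detected,
                          # so no valid schedule exists within a reaction
```

A cycle means no topological order of the signals exists within a reaction, which is exactly why the composition of P' and Q' deadlocks.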
2. Keeping causality and scheduling constraints explicit is required when performing the abstraction of a component.
What, then, are valid abstractions and implementations for synchronous components? Consider the following component P':
Y := f(X) | X := h(W) | V := g(U)
in which X is a local signal. A valid implementation of P' is P":
Y := f(X) | X := h(W) | V := g(U) | Y