Reactive Types for Dataflow-Oriented Software Architectures∗ Barry Norton Department of Computer Science University of Sheffield 211 Portobello Street, Sheffield S1 4DP, UK
[email protected] Abstract Digital signal–processing (DSP) tools, such as Ptolemy, LabView and iConnect, allow application developers to assemble reactive systems by connecting predefined components in generalised dataflow graphs and by hierarchically building new components by encapsulating sub–graphs. We follow the literature in calling this approach dataflow-oriented development. Our previous work has shown how a new process calculus, uniting ideas from previous systems within a compositional theory, can be formally shown to capture the properties of such systems. This paper first re–casts the graphical dataflow-oriented style of design into an underlying textual architecture design language (ADL) and then shows how the previous modelling approach can be seen as a system of process–algebraic behavioural types for such a language, so that type–checking is the mechanism used to statically diagnose the reactivity of applications. We show how both the existing notion of behavioural equivalence and a new behavioural pre-order are involved in this judgement.
1. Introduction The dataflow style provides a well understood basic formalism for designers of embedded digital signal processing systems. For this reason it is no surprise that tools presenting system design in this style became very popular and have retained their dominance. In applications involving measurement/monitoring and control there is a need for rapid application design and for the agility to make rapid design changes and variations. ∗
Research supported by EPSRC grant GR/M99637, British Council grant PRO1163/CH and EU Framework V grant IST2001-34038.
Matt Fairtlough Department of Computer Science University of Sheffield 211 Portobello Street, Sheffield S1 4DP, UK
[email protected] Systems such as LabView and iConnect include huge libraries of components to interface with measurement hardware and must cope with this drive for rapid development since they are used to create ad hoc measurement systems and bespoke embedded controllers respectively. In order to deal with dynamic aspects such as user interaction, LabView introduces control primitives, familiar from imperative languages, as first class and graphically represented entities within a therefore heterogeneous design language. iConnect, on the other hand, like the dataflow models in the Ptolemy framework, chooses rather to generalise over the dataflow model to allow control signals and the behaviour of actors to be both non-deterministic and stateful. Although this choice allows a homogeneous model to be presented to the user, neither iConnect nor Ptolemy provide a general mechanism to check the ‘consistency’ of these graphs, i.e. the absence of buffer underflow or overflow during scheduling. Although Ptolemy provides a mechanism for certain classes of dataflow graph that guarantees consistency and produces a scheduling statically, this only generalises to systems where nondeterminism and statefulness are bounded by strict probabilistic assumptions not met by general components. While dynamic scheduling can avoid buffer underflow — by operating in a data-driven mode, i.e. scheduling a component only when sufficient input data exist — there is no guarantee against buffer overflow and this is left as a run–time error. In a previous paper it was shown how a richer model may be built for these systems using process–algebraic techniques, accounting for the statefulness and nondeterminism of components [21]. The foremost feature of this model was its compositionality. In this paper we show how this can be used as the basis of a system of behavioural types, allowing modular static validation for consistency. 
Along the way we also recast the design style into an architectural design language.
2. Dataflow-Oriented Design We shall exemplify dataflow-oriented systems with a digital spectrum analyser built from the components shown in Figures 1 through 6. The overall design is shown in Figure 7, which relies also on a further component defined by encapsulation as shown in Figure 8. Later we shall see how the behaviours shown within the component diagrams as transition systems are formally attached as reactive types. For now we observe that these user-oriented representations of the types consist of finite automata where the labels are divided into: output channels, labelled with a ‘!’ and named for the output ports of the component (shown on the right) and the channel ‘r’ which signals ‘readiness for execution’ to the scheduler; input channels, labelled with a ‘?’ and named for the input ports (shown on the left of the component) as well as the channel ‘e’, via which the scheduler signals permission to execute; unlabelled internal steps, which we understand as the execution of the algorithm that the component encapsulates. We observe, but do not formalise, here that well-formed descriptions must be labelled only from these alternatives and any path must strictly follow the cycle: input, readiness, scheduling, execution, output. The intuition for the example in Figure 7 is that it will compute on input from a ‘sound card’ device, introduced via the instance c1 of the Soundcard component, and ultimately display the results via instances of the BarGraph component, as encapsulated within the Element component. The point of the Element component is to prepare the raw values for display. This is parameterised not only in the raw value, but also in the range by which to filter, so that each instance will effect a different element in the overall spectrum. We see from its reactive type that Soundcard is always ready for execution and when allowed always produces a value and becomes ready again.
Similarly, as soon as BarGraph receives a value at its input port it becomes ready to execute at each cycle. To set the frequency range of each instance of the Element component, an instance of Const is used. This is initially ready to execute and provide such a value but thereafter never becomes ready again. This is a simple form of statefulness; a more advanced form is present in the behaviour of Filter, which needs one value on each port to become ready but thereafter only accepts input on port b. Our intuition for this behaviour is that port a is a filter range and port b a signal to filter; being a digital filter, the functional behaviour is implemented using calculation over previous values (of b) and therefore the filtering parameters cannot be dynamically changed.
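As a concrete illustration, the reactive types drawn in Figures 1 through 6 can be read as ordinary finite automata over the labels a?, b?, c!, r!, e? and an unlabelled internal step. The sketch below is our own encoding, not part of any tool discussed here; the state names are invented, and the internal step is written "tau".

```python
from dataclasses import dataclass

@dataclass
class ReactiveType:
    start: str
    transitions: dict  # state -> list of (label, next_state)

# Const: initially ready, executes once, outputs on c, never ready again.
CONST = ReactiveType("s0", {
    "s0": [("r!", "s1")],
    "s1": [("e?", "s2")],
    "s2": [("tau", "s3")],
    "s3": [("c!", "s4")],
    "s4": [],                      # never becomes ready again
})

# Filter: needs one value on each of a and b before the first execution,
# thereafter accepts input only on b.
FILTER = ReactiveType("s0", {
    "s0": [("a?", "s1"), ("b?", "s1b")],
    "s1": [("b?", "s2")],
    "s1b": [("a?", "s2")],
    "s2": [("r!", "s3")],
    "s3": [("e?", "s4")],
    "s4": [("tau", "s5")],
    "s5": [("c!", "s6")],
    "s6": [("b?", "s2")],          # only b is accepted from now on
})

def accepts(t: ReactiveType, trace) -> bool:
    """Can some path of the automaton perform exactly this label sequence?"""
    states = {t.start}
    for label in trace:
        states = {nxt for s in states
                  for l, nxt in t.transitions.get(s, []) if l == label}
        if not states:
            return False
    return True
```

For instance, `accepts(FILTER, ["a?", "b?", "r!", "e?", "tau", "c!", "b?", "r!"])` holds, while `accepts(CONST, ["r!", "e?", "tau", "c!", "r!"])` does not, reflecting that Const is ready exactly once.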
Figure 1: Const
Figure 2: Quantise
Figure 3: Soundcard
Figure 4: Filter
Figure 5: BarGraph
Figure 6: Threshold
In order to understand how non-determinism can play a part in reactive types, we also include the type of a component Threshold, unused in this simplified example. The implementation of this component will accept a threshold value at port a and a ‘signal’ value at port b; it will only propagate values received on b if they exceed the threshold. (In the full example developed for [20], the Threshold component is used to create a peak level display within the spectrum analyser, new propagated values being fed back to this component via a loop in the graph.) There are two kinds of non-determinism present in this definition. Firstly, the component does not predetermine the number of values accepted at port a beyond its requirement that there is at least one before the first execution. Secondly, the output behaviour is not uniquely determined, since this depends on the value received, but is presented as a non-deterministic choice between two alternatives. In order to understand the behaviour of the system formed from these components, it is necessary to establish the communication idiom and synchronisation idiom under which the components are composed. In Ptolemy the two together are called a model of computation. In this presentation we shall consider how these idioms are present in the tool iConnect, since the particular notion of broadcast communication, as described in Section 2.2, is helpful.
Example = system(
    insts(Soundcard, s1);
    insts(Const, s2);
    insts(Const, s3);
    instc(Element, c1);
    instc(Element, c2);
    wire(s1, c, c1, a);
    wire(s1, c, c2, a);
    wire(s2, c, c1, b);
    wire(s3, c, c2, b))

Figure 7: Example Application

Element = enc(
    instc(Filter, c1);
    instc(Quantise, c2);
    instc(BarGraph, c3);
    iwire(a, c1, a);
    iwire(b, c1, b);
    wire(c1, c, c2, a);
    wire(c2, c, c3, a),
    {a, b}, {})

Figure 8: Example Encapsulation
2.1. Synchrony

The synchrony hypothesis embodies the principle that reactive systems should be considered as forming a complete reaction to environmental stimulus instantaneously, i.e. infinitely faster than the environment can react back [5]. In this way, system behaviour may be broken up into a discrete series of synchronous steps between which the environment is revisited. This principle is implemented in iConnect on two levels.

Firstly, the provision of source components allows internalisation of environmental sampling into the component model. Source components are those without input ports, i.e. Soundcard and Const in the example; all others are called computation components and are involved in the computation of a synchronous reaction to data from source components. We can thus see how iConnect schedules systems in a synchronous manner by allowing instances of source components to be executed and thereafter executing all instances of computation components that become ‘ready’ (i.e. ready for execution) until the list of these is exhausted, at which point the source components are considered again. This continues in a cyclical manner. The second way in which the synchrony hypothesis may be used in the scheduling of dataflow–oriented systems concerns the mechanism for encapsulation. In order to be intelligible as components in their own right, encapsulated subsystems must behave in the same manner as core components from the framework. Importantly, their execution must be seen as atomic, i.e. uninterruptible. For any instance of such a component, we can see this as another form of synchrony whereby the dataflow graph into which the component is instantiated forms an environment, while the component instances created according to its definition must form synchronous, i.e. instantaneous, reactions that may not be interrupted by that environment.

2.2. Isochrony

As well as generalising from pure dataflow with respect to actor behaviours — allowing statefulness and non-determinism — the dataflow-oriented designs we consider also generalise the communication mechanism between actors. In particular we allow ‘forks’, ‘joins’ and ‘loops’ in the graphs. ‘Joins’ behave in the manner referred to in the literature as non-deterministic merge, wherein the arrival of a value at either side of the join triggers a communication to the recipient and does not pre–determine or prejudice from which side the next value will be propagated. ‘Forks’, on the other hand, have a broadcast behaviour, meaning that all recipients will receive the value. In order to reason sensibly about designs, especially those involving loops, it is an established principle of hardware design — just as the synchrony hypothesis is an established principle of software design — that forks should behave in a manner consistent with the isochronic view. Isochronic forks not only broadcast one copy of each arriving value to each recipient, but communicate these values at the same time, so that it is not possible for an earlier recipient to prejudice the handling of the data by another recipient. We shall call the principle that all forks in a design behave in this manner the isochrony hypothesis. What will prove particularly useful about this principle in our designs is that it allows local conditions on communication to be turned into global conditions on the (qualitative) timing of the system. The essence of our type checking will be that a system is well-defined in terms of synchronous instants of time; since these are defined in terms of faster isochronous instants, this includes a check that every possible broadcast in the system behaviour is well-defined, i.e. would not cause a buffer overflow.
System ::= system(SourceSystem, CompSystem)
SourceSystem ::= SourceInstance | SourceInstance; SourceSystem
SourceInstance ::= insts(Source, name)
Source ::= Const | Soundcard
CompSystem ::= CompInstance | CompInstance; CompSystem | CompSystem; Wire
CompInstance ::= instc(Comp, name)
Comp ::= Filter | Quantise | BarGraph | enc(CompSystem, name, I, O)
Wire ::= wire(name, port, name, port) | iwire(port, name, port) | owire(name, port, port)

Table 1: Architectural Description Language
3. A Dataflow-Oriented ADL We now define an architecture description language for dataflow-oriented systems with the features laid out in the previous section. This is shown in Table 1. This establishes, as informally related in the previous section, that a given system composes a positive number of instances of source components, introduced via the insts keyword, with a positive number of computation components, introduced via the instc keyword, and some number of wires to connect them. A system of wires that connect outputs of named component instances to inputs of others, formed with the wire keyword, may be re–interpreted, as mentioned previously, as forming a system of forks connected to each output and joins connected to each input. As well as the core components provided by the framework, the instc keyword may be applied to encapsulated subsystems of computation components formed with the enc keyword. There are also special wires, made available via the iwire and owire keywords, which expose input and output ports during encapsulation without naming their internal connected parties.
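To sketch how terms of this ADL might be represented and checked mechanically, the fragment below encodes instance declarations and wires as Python data. The class names follow the grammar of Table 1, but the well-formedness check itself is our own illustration, not part of the paper's type system.

```python
from dataclasses import dataclass

@dataclass
class InstS:   # insts(Source, name)
    comp: str
    name: str

@dataclass
class InstC:   # instc(Comp, name)
    comp: str
    name: str

@dataclass
class Wire:    # wire(name, port, name, port)
    src: str
    out_port: str
    dst: str
    in_port: str

SOURCES = {"Const", "Soundcard"}

def well_formed(decls) -> bool:
    """insts/instc must be applied to source/computation components
    respectively, and every wire must connect two declared instances."""
    names = set()
    for d in decls:
        if isinstance(d, InstS):
            if d.comp not in SOURCES:
                return False
            names.add(d.name)
        elif isinstance(d, InstC):
            if d.comp in SOURCES:
                return False
            names.add(d.name)
    return all(w.src in names and w.dst in names
               for w in decls if isinstance(w, Wire))

# The example application of Figure 7:
example = [
    InstS("Soundcard", "s1"), InstS("Const", "s2"), InstS("Const", "s3"),
    InstC("Element", "c1"), InstC("Element", "c2"),
    Wire("s1", "c", "c1", "a"), Wire("s1", "c", "c2", "a"),
    Wire("s2", "c", "c1", "b"), Wire("s3", "c", "c2", "b"),
]
```

A wire naming an undeclared instance, such as `Wire("zz", "c", "c1", "a")`, makes `well_formed` fail.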
4. CaSE: Calculus for Synchrony and Encapsulation This section summarises our process calculus CaSE, which is inspired by Hennessy and Regan’s TPL [12], itself an extension of Milner’s CCS [16] with regard to syntax and operational semantics. In addition to CCS, TPL includes (i) a single abstract clock σ, which is interpreted not quantitatively as some number encoding an exact time but qualitatively as a recurrent, global and abstract synchronisation event; (ii) a timeout operator ⌊P⌋σ(Q), similar to ATP [17], where the occurrence of σ deactivates process P and activates Q; (iii) the concept of maximal progress [23], which imposes synchrony on a system by demanding that a clock can only tick within a process if the process cannot engage in any further internal activity τ. Our process calculus CaSE further extends TPL by (i) allowing for multiple clocks σ, ρ, . . . as in PMC [2] and CSA [8] while, in contrast to PMC and CSA, maintaining the global interpretation of maximal progress; (ii) explicit ‘stalling’ operators ∆ and ∆σ that locally prohibit the ticking of all clocks and of clock σ, respectively, i.e. in a manner from which a context may recover; (iii) clock–hiding operators /σ such that all clock ticks of process P are internalised in process P/σ. It is worth mentioning here that, to the best of our knowledge, clock–hiding operators had not previously been investigated in the process–algebra community [3]. In the context of this paper, clock hiding is essential since it facilitates the modelling of encapsulation in dataflow-oriented designs. Finally, in contrast to TPL, and similar to CCS and CSA, we equip CaSE not with a testing–based [10] but with a bisimulation–based semantic theory [16].
4.1. Syntax and Operational Semantics The syntax is defined, as shown in Table 2, with respect to a countable set of names, Λ, and a countable set of clocks, T. The semantics of processes — themselves named P and being the set of closed, guarded expressions — is then defined by a labelled transition system ⟨P, A ∪ T, −→, P⟩ according to the operational semantics in Table 3. This includes the usual CCS rules [16] for nondeterministic interleaving of actions (Act, Sum1, Sum2, Com1, Com2), for synchronisation on actions (Com3) — whereby an action named a can synchronise with a concurrent action on the complementary action ā, producing a ‘silent action’ τ that can be the means of no further synchronisation — and for recursion and restriction (Rec, Res).
a, ā, b, b̄, … ∈ Λ ∪ Λ̄        ρ, σ, … ∈ T        L ⊆ Λ        T ⊆ T
α, β, … ∈ Λ ∪ Λ̄ ∪ {τ}        γ, δ, … ∈ Λ ∪ Λ̄ ∪ {τ} ∪ T

E ::= 0 | ∆ | ∆σ | α.E | ⌊E⌋σ(E) | E + E | E|E | µX.E | X | E \ a | E/σ

Table 2: Core CaSE Syntax

Idle:    0 --σ--> 0
Act:     α.P --α--> P
Patient: a.P --σ--> a.P
Stall:   ∆σ --ρ--> ∆σ  (1)
TO1:     P -τ-/-> implies ⌊P⌋σ(Q) --σ--> Q
TO2:     P --γ--> P′ implies ⌊P⌋σ(Q) --γ--> P′  (3)
Sum1:    P --α--> P′ implies P + Q --α--> P′
Sum2:    Q --α--> Q′ implies P + Q --α--> Q′
Sum3:    P --σ--> P′ and Q --σ--> Q′ implies P + Q --σ--> P′ + Q′
Com1:    P --α--> P′ implies P | Q --α--> P′ | Q
Com2:    Q --α--> Q′ implies P | Q --α--> P | Q′
Com3:    P --a--> P′ and Q --ā--> Q′ implies P | Q --τ--> P′ | Q′
Com4:    P --σ--> P′ and Q --σ--> Q′ and P | Q -τ-/-> implies P | Q --σ--> P′ | Q′
Rec:     E{µX.E/X} --γ--> P′ implies µX.E --γ--> P′
Res:     P --γ--> P′ implies P \ a --γ--> P′ \ a  (2)
Hid1:    P --σ--> P′ implies P/σ --τ--> P′/σ
Hid2:    P --γ--> P′ implies P/σ --γ--> P′/σ  (3)

where: (1) ρ ≠ σ,  (2) γ ∉ {a, ā},  (3) γ ≠ σ

Table 3: CaSE Semantics

α̲.E      ≝  α.E + ∆
aT.E     ≝  a.E + ∆T
σ̲.E      ≝  ⌊∆⌋σ(E)
σ.E      ≝  µX.⌊∆⌋ρ⃗(X)σ(E)   (*, **)
σT.E     ≝  µX.⌊∆⌋ρ⃗(X)σ(E)   (*, ***)
∆T       ≝  Π{σ∈T} ∆σ
⌊E⌋σ⃗(F⃗)   ≝  ⌊E⌋σ1(F1)σ2(F2) ⋯ σn(Fn)
⌊E⌋σ⃗(F)   ≝  ⌊E⌋σ1(F)σ2(F) ⋯ σn(F)
E \ L    ≝  E \ a1 \ a2 ⋯ \ an   (****)
E/σ⃗      ≝  E/σ1/σ2 ⋯ /σn   (*****)

F ::= 0 | ∆ | ∆T | γ.F | γ̲.F | γT.F | ⌊F⌋σ(F) | ⌊F⌋σ⃗(F) | ⌊F⌋σ⃗(F⃗) |
      F + F | F|F | µX.F | X | F \ L | F/σ⃗

where: * X ∉ fv(E),  ** ∀ρ ∈ T · ∃i · ρi = ρ,  *** ∀ρ ∈ (T \ T) · ∃i · ρi = ρ,
**** L = {a1, a2, ⋯, an},  ***** σ⃗ = σ1 · σ2 · ⋯ · σn
and: ⌊E⌋ρ(F)σ(G) = ⌊⌊E⌋ρ(F)⌋σ(G) etc.

Table 4: Derived CaSE Syntax
3
We adopt standard definitions [11] for deterministic time, measured in clock transitions and governed by maximal progress (Sum3, Com4, TO1, TO2, Idle, Patient), adding a shorthand σ.E, defined in Table 4, which allows a stable ‘patient’ clock prefix to be formed within a known clock universe. It is natural then to define prefix as patient communication, i.e. allowing an unlimited number of transitions in each clock without being disturbed — a behaviour we name ‘idling’; 0, as the empty sum of communications, becomes wholly ‘idle’. Our calculus is novel in introducing a ‘stalling’ process ∆ that is recoverable in context, i.e. to which clock transitions can be explicitly added with the time-out operator, and which is stable and so can usefully be placed in a summation. These features — which contrast with the process Ω (≝ τ.Ω) in TPL [12], there motivated within a testing–based theory, but are analogously motivated by our bisimulation–based theory — allow the formation of insistent communication and insistent clocks, shown as a̲.E and σ̲.E and defined with other useful shorthand notations in Table 4.
In a setting with multiple clocks it is also natural to generalise the notion of ‘stalling’ to accommodate the space between 0 and ∆. We introduce a family of processes ∆σ, each of which will prevent a single clock from progressing. We can then define a shorthand for processes that will stop a subset T of the clocks from progressing, ∆T. The use of these processes, and the intuition necessary to understand them, will be discussed further in Section 4.3. The final novelty in our syntax is the addition of a ‘clock hiding’ operator. For the process P/σ, any σ-transition permitted within P becomes a silent action. This means that no context may either preempt this transition — so that maximal progress becomes a localised concept — or synchronise with it. Furthermore, the silent action is preemptive over the remaining clocks, so there is an implicit notion of priority. The intuition is that a hidden clock is a local and strictly finer notion of time than the remaining clocks.
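The operational rules can be prototyped directly. The toy interpreter below is our own encoding — restricted to a single clock and the fragment 0, action prefix, timeout and hiding, with terms as nested tuples — and illustrates the two points just made: under maximal progress a clock may tick only when no τ is possible, and hiding turns a clock tick into a preemptive τ.

```python
CLOCKS = {"sigma"}  # a single-clock universe, for simplicity

def transitions(p):
    """Set of (label, successor) pairs of a process term.

    Terms: ("nil",) | ("pre", a, P) | ("to", P, clock, Q) | ("hide", P, clock)
    """
    kind = p[0]
    if kind == "nil":                      # Idle: 0 ticks on every clock
        return {(c, p) for c in CLOCKS}
    if kind == "pre":                      # Act + Patient: prefix idles
        _, a, cont = p
        return {(a, cont)} | {(c, p) for c in CLOCKS}
    if kind == "to":                       # timeout with maximal progress
        _, body, clk, alt = p
        body_moves = transitions(body)
        moves = {(lbl, nxt) for lbl, nxt in body_moves if lbl != clk}
        if not any(lbl == "tau" for lbl, _ in body_moves):
            moves.add((clk, alt))          # the clock fires only if no tau
        return moves
    if kind == "hide":                     # hiding: clk becomes tau
        _, body, clk = p
        moves = {(("tau" if lbl == clk else lbl), ("hide", nxt, clk))
                 for lbl, nxt in transitions(body)}
        if any(lbl == "tau" for lbl, _ in moves):
            # maximal progress: the new tau preempts the remaining clocks
            moves = {(l, n) for l, n in moves if l not in CLOCKS}
        return moves
    raise ValueError(kind)
```

For the process ⌊a.0⌋σ(0) the interpreter offers both the action a and a σ-tick to the timeout body; after hiding σ, the tick shows up as a preemptive τ instead.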
4.2. Equivalences We follow PMC [2] and CSA [8] in defining an observational theory based on weak bisimulation.

Definition 4.1 A symmetric relation R ⊆ P × P is a temporal weak bisimulation timed in C if, for every ⟨P, Q⟩ ∈ R:

1. P --γ--> P′, γ ∈ Λ ∪ Λ̄ ∪ {τ} ∪ C implies ∃Q′. Q ==γ̂==> Q′ and ⟨P′, Q′⟩ ∈ R
2. Q --γ--> Q′, γ ∈ Λ ∪ Λ̄ ∪ {τ} ∪ C implies ∃P′. P ==γ̂==> P′ and ⟨P′, Q′⟩ ∈ R

We write P ≈C Q if ⟨P, Q⟩ ∈ R for some temporal weak bisimulation R timed in C. As in work on interface automata [9], we should like a notion of refinement/abstraction that represents the different obligations with respect to declaring inputs versus outputs, and similarly adapt the notion of alternating simulation [1].

Definition 4.2 A relation R ⊆ P × P is an alternating simulation timed in C if, for every ⟨P, Q⟩ ∈ R:

1. P --a--> P′, a ∈ Λ̄ implies ∃Q′. Q ==a==> Q′ ∧ ⟨P′, Q′⟩ ∈ R
2. Q --a--> Q′, a ∈ Λ implies ∃P′. P ==a==> P′ ∧ ⟨P′, Q′⟩ ∈ R
3. P --τ--> P′ implies ∃Q′. Q ==> Q′ ∧ ⟨P′, Q′⟩ ∈ R
4. Q --τ--> Q′ implies ∃P′. P ==> P′ ∧ ⟨P′, Q′⟩ ∈ R
5. P --σ--> P′, σ ∈ C implies ∃Q′. Q ==σ==> Q′ ∧ ⟨P′, Q′⟩ ∈ R
6. Q --σ--> Q′, σ ∈ C implies ∃P′. P ==σ==> P′ ∧ ⟨P′, Q′⟩ ∈ R

We write P ≼C Q, and say P refines Q, if ⟨P, Q⟩ ∈ R for some alternating simulation R timed in C.
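Definition 4.1 can be checked mechanically on finite transition systems. The following naive greatest-fixpoint sketch is our own encoding: labels are plain strings, clocks in C are treated like any other observable label, and pairs that fail the transfer property are removed until a fixpoint is reached.

```python
def weak_succs(lts, state, label):
    """States reachable by tau* label tau* (or tau* alone when label=='tau'),
    implementing the weak arrow of Definition 4.1."""
    def tau_closure(states):
        seen, stack = set(states), list(states)
        while stack:
            s = stack.pop()
            for l, t in lts.get(s, []):
                if l == "tau" and t not in seen:
                    seen.add(t)
                    stack.append(t)
        return seen

    pre = tau_closure({state})
    if label == "tau":         # gamma-hat: a tau may be matched by idling
        return pre
    mid = {t for s in pre for l, t in lts.get(s, []) if l == label}
    return tau_closure(mid)

def weakly_bisimilar(lts, p, q):
    """Greatest fixpoint: start from the full relation, strip failing pairs."""
    rel = {(a, b) for a in lts for b in lts}

    def matched(a, b):
        return all(any((a2, b2) in rel for b2 in weak_succs(lts, b, l))
                   for l, a2 in lts.get(a, []))

    changed = True
    while changed:
        changed = False
        for pair in list(rel):
            a, b = pair
            if not (matched(a, b) and matched(b, a)):
                rel.discard(pair)
                changed = True
    return (p, q) in rel
```

On the system `{"p0": [("tau", "p1")], "p1": [("a", "p2")], "p2": [], "q0": [("a", "q1")], "q1": []}` the check relates p0 and q0, since the leading τ is unobservable.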
4.3. Transition Diagrams In order to diagram the semantics of processes, which may have infinite behaviours, we concentrate on those systems that have a finite number of states, up to a notion of structural congruence, and draw transition graphs, identifying the states that represent structurally congruent processes, as in CCS [16]. A further notational mechanism is needed to diagram systems that define behaviour within an infinite, or extensible, clock universe. Whereas finite CCS processes give rise to finite transition graphs, the ‘idling’ capability of timed processes with patient communications gives rise to transitions in any clock that a context may provide. We therefore must show such transitions symbolically. As shown in Figure 10, we graphically represent two general types of state. In the leftmost state, drawn with a doubled circle, all clocks not otherwise shown label self-transitions (i.e. loops) on this state. In the other states, however, which are shown as a solid circle, all clocks not otherwise shown have no transitions. The idea behind these conventions is that the solid state is an instantaneous state in which time cannot progress, whereas the other state can be occupied for an amount of time measured in the visible clocks. We can therefore see that cn is a patient communication while am is an insistent one and σcn an insistent clock. This is represented directly in the syntax of this process: µX.cn.am.σcn.X. Between these two extremes is the possibility that a state may be occupied for a length of time measured on some clocks but not others. In this case we explicitly label the state with those clocks that are stopped, as in the leftmost state in Figure 9.
Some clocks in our model may be understood — as was the emphasis in PMC [2] and CSA [8] — in terms of the ‘system clocks’ in synchronous circuit design, where the interpretation of maximal progress is that the internal (and asynchronous) behaviour of a circuit is effectively infinitely faster than the clock which ticks at regular intervals. This is the case, as we shall see in the next section, with the clock governing the synchronous behaviour of systems. Other clocks, however, are more naturally understood by analogy with the chronograph in a wristwatch in the sense that they can be turned on and off. When switched on they can measure a series of events, as can the lap counter of a chronograph, that occur near-infinitely faster than the instants of time measured by a coarser clock (in this case the synchronous clock, which, extending the analogy, can be compared with the hour chime of the wristwatch).
Figure 9: Broadcast Agent
Figure 10: Wire Agent
5. Reactive Types In the type systems for certain lambda calculi — and as a result, those also of certain functional programming languages — where computation is based on reduction, well-typedness guarantees termination (i.e. strong normalisation). The type system can be seen as checking that the ‘pieces’ of a system are ‘plugged together’ correctly so that when dynamically a function is applied to its arguments, the arguments are as expected by that function and progress will be made towards termination without error. Analogously, the property we should desire of a type system for a reactive system is that it is ‘plugged together’ in such a way that, when executed, communication between component instances will similarly proceed as expected by both parties and the system will continue to react to its environment without error. To understand how our behavioural type system can capture the property of reactivity (in fact, a notion of ‘non-termination’), it is necessary to consider how the two processes in Figures 9 and 10, discussed in the previous section, interact in composition. The situation we shall construct places one instance of the broadcast agent in parallel composition with some number of instances of the wire agent. The intention is to construct an isochronic fork broadcasting a communication on c from a component instance named n via a channel cn to a family of receivers m⃗ on ports a⃗, via the channels represented for each by am. This must be achieved in a compositional manner, i.e. open to further composition of wire agents, and we do so via the clock σcn, unique to this channel for broadcasting; a formal proof regarding this was presented in [21]. Until the broadcast agent receives a communication on channel c, the composition is patient in all clocks except σcn. Once it has received this communication it will continue to communicate copies until the clock ticks.
The clock will not tick while there is still a party willing to communicate on this channel so it is guaranteed that all wire agents will be able to receive the value within the instant of time being measured. Similarly, no wire agent can pick up more than one copy, since each must synchronise on the clock before it is again receptive and the broadcast agent will not synchronise on the clock until it is finished broadcasting.
Figure 11: Distributed Scheduling Agent (the two branches are instantiated with i = e, j = c for source components and i = c, j = e for computation components)

The final important point about these agents is that each wire agent is insistent in communicating the value on to the receiver component instance. This means that no clock will tick until each value can be passed on and, since this is on a channel uniquely named for the receiver instance (that will be restricted away once this has been composed), if there is any state where the producer may produce a value and one of the receivers is not input-enabled, this state will become time-locked in the system model. Since its definition is sensitive to clock transitions, observation equivalence can thus be used to diagnose this condition. The overall structure of the type system concerns checking untimed processes representing ground and assigned types (constructed via the enc rule) against timed processes representing the composition of instances of such. To enforce this, all assigned types are required to be expressed in the sequential and strongly-guarded (i.e. guarded and without τ-loops) CCS fragment of CaSE (by inspection of Table 5 we see that ground types also fulfil this condition). Within this fragment, once we introduce timing ‘behind the scenes’, the behaviours may not be time-locked (including only finite paths through τ-transitions); checking against an assigned type therefore implies freedom from communication flaws. Table 8 shows how types are formed for the instantiation of source components, in the typing judgement for the insts keyword, by the inclusion of a broadcast agent for each output (the parallel component starting Π, which is a distributed parallel composition) and also of an agent following the pattern shown in Figure 11 (starting from the top branch). This agent distributes the scheduling responsibilities, which we have called the synchronisation idiom, down to the component instances in a compositional manner, as proven in [21].
Essentially a token passing game is played, guaranteeing serialised execution, between peers (i.e. source components with source components) so that a source component may only be executed if the scheduling agent associated with it holds the te token. The agent will then pass on this token to another source component unless the clock σn ticks signalling the end of the phase, at which point the tc token is passed to a computation component. Maximal progress ensures this clock will not tick if any other source component is ready.
The same game is played, with the tokens in opposite roles (i.e. the lower branch of Figure 11), between the computation components, thus ensuring synchronous system behaviour. There are slight variations between this agent and the one for source components, since the latter must not allow a second execution of the component instance within one cycle, whereas this is explicitly allowed for computation components (loops being a valid design pattern), and since the agent for computation components must delegate up the request for execution, via re, for the purposes of encapsulation. Finally, the behaviour of an encapsulated graph depends on whether it is to form a complete application or a new component. In the former case, where the source subsystem has a reactive type represented by P and the computation subsystem Q, the behaviour will be as per (P | Q | te.0), i.e. we simply supply the initial token and thereafter the tokens te and tc are exchanged by the distributed scheduling components. In order to be well-typed, however, as shown in Table 9 — 0 being the only valid type to assign a closed system — we also check that, having restricted away all communication channels and hidden their clocks, this behaviour is equivalent to 0, which admits an unbounded number of clock transitions and which cannot therefore be equated with a system with communication flaws causing time-stops. Where we are creating a new component, we have first to make the opposite transformation to instantiation — i.e. signalling readiness for execution as a component and only then inserting a token to be passed around. Thereafter a subtype check is made with respect to an assigned type Q, which, being in the strongly-guarded sequential CCS fragment, must be live in all clocks.
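The token-passing game can be sketched operationally. The simulation below is our own simplification — component behaviour is stubbed as functions and the clocks are implicit in the two phases — but it shows the serialised source phase followed by the computation phase that runs until no instance remains ready:

```python
def run_cycle(sources, comps, buffers):
    """One synchronous instant. ``sources``/``comps`` map instance names to
    step functions returning (destination, value) pairs they emit;
    ``buffers`` holds pending inputs per computation instance."""
    trace = []
    # source phase: the t_e token visits each source exactly once
    for name, step in sources.items():
        trace.append(("exec", name))
        for dst, value in step():
            buffers.setdefault(dst, []).append(value)
    # the phase clock ticks; the t_c token then circulates among the
    # computation instances until none remains ready
    ready = [n for n in comps if buffers.get(n)]
    while ready:
        name = ready.pop(0)
        trace.append(("exec", name))
        inputs = buffers.pop(name)
        for dst, value in comps[name](inputs):
            buffers.setdefault(dst, []).append(value)
            if dst in comps and dst not in ready:
                ready.append(dst)        # loops may re-enable components
    return trace
```

For instance, a single Const-like source feeding one computation instance yields the serialised trace `[("exec", "s2"), ("exec", "c1")]` for the first cycle.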
As well as checking for internal communication flaws and allowing us to abstract away internal states — both simplifying the type and going some way towards avoiding the state-explosion problem, as we had previously proposed with observation equivalence [19] — the use of the timed alternating simulation preorder as subtyping allows the removal of paths initiated by inputs for which the encapsulated subsystem is not really ready and which could therefore result in internal communication flaws. By removing such inputs, this externalises the avoidance requirement, i.e. imposes it on the environment. If a context, i.e. an application into which the new component is composed, type-checks under the assigned type, then we know from the resulting well-timedness that the input is never supplied. If we then substitute in the actual behaviour we have the same guarantee, and the flaw is avoided.
6. Related Work Behavioural types in Ptolemy are based on the formalism of interface automata [9] and employed for checking the so–called compatibility property between components [7]. However, interface automata are not expressive enough to reason about consistency¹, which Ptolemy, for the restricted class of synchronous dataflow (SDF) models, handles by linear–algebra techniques [14]. Statefulness and non-determinism are only dealt with by Ptolemy’s consistency check within the restrictions of the models named ‘cyclo-static dataflow’ and ‘binary dataflow’, wherein a statistical assumption is made about the dynamic coverage of each state or non-deterministic choice [6]. In contrast, the theory of CaSE is more general than SDF and lends itself to compositionally checking consistency in dataflow-oriented designs without such restrictions. Furthermore, note that in order to model Ptolemy the non-standard extension of ‘transient states’ has had to be introduced to interface automata [15]. Their motivation and effect may be compared with the difference between instantaneous and idling states in this work. On the other hand, whereas the current work has used the principle of maximal progress to accommodate these features through a compositional notion of qualitative time, ‘transient states’ have yet to be reconciled with the theoretical results of interface automata.
7. Conclusions and Future Work

This paper has presented a system of behavioural types that generalises an important technique, the static validation of consistency, to the more general setting of a tool whose use is well established within the engineering community. As such we have both met the challenge set out informally for Reactive Types in [19] and further motivated our process calculus, CaSE [21]. While it might be argued that a type system should allow abstract properties of terms to be composed purely syntactically, this is not clearly possible with the rich behaviours we must consider, and so we follow several previous approaches from the literature [18, 7, 22, 4] in nevertheless referring to our approach as providing a behavioural type system. Our future work will involve formalising and proving the intuitive properties of alternating simulation. This will extend the theory of CaSE, allow further bridges to be built with the Ptolemy community, and foster an exchange of ideas with work on interface automata.

¹ We note that the definition of 'consistency' dealt with in [13] is different and weaker, though alternating simulation is also used.
References

[1] R. Alur, T. A. Henzinger, O. Kupferman, and M. Y. Vardi. Alternating refinement relations. In Proc. 9th Intl. Conference on Concurrency Theory (CONCUR '98), number 1466 in LNCS. Springer-Verlag, 1998.
[2] H. Andersen and M. Mendler. An asynchronous process algebra with multiple clocks. In Proc. 5th European Symposium on Programming (ESOP '94), volume 788 of LNCS, pages 58–73. Springer-Verlag, 1994.
[3] J. Bergstra, A. Ponse, and S. Smolka, editors. Handbook of Process Algebra. Elsevier Science, 2002.
[4] M. Bernardo, P. Ciancarini, and L. Donatiello. On the formalisation of architectural types with process algebras. In Proc. 8th Intl. Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS 2000), pages 140–148. ACM Press, 2000.
[5] G. Berry and G. Gonthier. The Esterel synchronous programming language: Design, semantics, implementation. Science of Computer Programming, 19(2):87–152, 1992.
[6] S. Bhattacharyya. Software synthesis and code generation for signal processing systems. IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 47(9):849–875, September 2000.
[7] A. Chakrabarti, L. de Alfaro, T. Henzinger, M. Jurdzinski, and F. Mang. Interface compatibility checking for software modules. In Proc. 14th International Conference on Computer Aided Verification (CAV '02), volume 2404 of LNCS, pages 428–441, 2002.
[8] R. Cleaveland, G. Lüttgen, and M. Mendler. An algebraic theory of multiple clocks. In 8th Intl. Conference on Concurrency Theory (CONCUR '97), volume 1243 of LNCS, pages 166–180. Springer-Verlag, 1997.
[9] L. de Alfaro and T. Henzinger. Interface automata. In Proc. 8th European Soft. Eng. Conference and 9th ACM SIGSOFT International Symposium on Foundations of Soft. Eng. (ESEC/FSE 2001), volume 26, 5 of Software Engineering Notes, pages 109–120. ACM Press, 2001.
[10] R. De Nicola and M. Hennessy. Testing equivalences for processes. Theoretical Computer Science, 34:83–133, 1983.
[11] M. Hennessy. Timed process algebras: A tutorial. Technical Report 1993:02, COGS, University of Sussex, 1993.
[12] M. Hennessy and T. Regan. A process algebra for timed systems. Information and Computation, 117(2):221–239, March 1995.
[13] Y. Jin, R. Esser, C. Lakos, and J. W. Janneck. Modular analysis of dataflow process networks. In Proc. 6th Intl. Conference on Fundamental Approaches to Software Engineering (FASE '03), number 2621 in LNCS, pages 184–199. Springer-Verlag, 2003.
[14] E. Lee. Consistency in dataflow graphs. IEEE Transactions on Parallel and Distributed Systems, 2(2):223–235, April 1991.
[15] E. A. Lee and Y. Xiong. A behavioral type system and its application in Ptolemy II. Formal Aspects of Computing, 2004.
[16] R. Milner. Communication and Concurrency. Prentice Hall, 1989.
[17] X. Nicollin and J. Sifakis. The algebra of timed processes, ATP: Theory and application. Information and Computation, 114:131–178, 1994.
[18] O. Nierstrasz. Regular types for active objects. In Proc. 8th Conference on Object-Oriented Programming Systems, Languages, and Applications (OOPSLA '93), ACM SIGPLAN Notices, pages 1–15, 1993.
[19] B. Norton. Reactive types for dataflow-oriented component-based development. In Proc. 2nd Workshop on Automated Verification of Critical Systems (AVoCS '02), volume CSR-02-6 of Technical Report Series. University of Birmingham, 2002.
[20] B. Norton. Reactive Types for Component-Based Development. PhD thesis, Department of Computer Science, University of Sheffield, 2004. In progress.
[21] B. Norton, G. Lüttgen, and M. Mendler. A compositional semantic theory for synchronous component-based design. In 14th Intl. Conference on Concurrency Theory (CONCUR '03), number 2761 in LNCS. Springer-Verlag, 2003.
[22] F. Puntigam. Type specifications with processes. In Proc. 8th International IFIP Conference on Formal Description Techniques for Distributed Systems and Communications Protocols (FORTE '95), volume 43 of IFIP Conference Proc. Chapman & Hall, 1995.
[23] W. Yi. CCS + time = an interleaving model for real time systems. In 18th International Colloquium on Automata, Languages and Programming (ICALP '91), volume 510 of LNCS, pages 217–228, 1991.
Acknowledgements

We should like to acknowledge the input of Prof. Michael Mendler into this work, the interest and feedback of Micro-Epsilon GmbH, who produce the tool iConnect, and the continued support of Prof. Fabio Ciravegna, Principal Investigator of the Dot.Kom project, and of Prof. Mahesan Niranjan, Head of the Department of Computer Science at Sheffield.
Table 5: Ground Types

\[
\frac{}{\sigma,\ \{\},\ \{c\} \;\vdash\; \textit{Const} \;:\; r.e.\tau.c.0}
\qquad
\frac{}{\sigma,\ \{\},\ \{c\} \;\vdash\; \textit{Soundcard} \;:\; \mu X.r.e.\tau.c.X}
\]
\[
\frac{}{\sigma,\ \{a,b\},\ \{c\} \;\vdash\; \textit{Filter} \;:\; a.b.r.e.\tau.c.\mu X.b.r.e.\tau.c.X \;+\; b.a.r.e.\tau.c.\mu X.b.r.e.\tau.c.X}
\]
\[
\frac{}{\sigma,\ \{a\},\ \{c\} \;\vdash\; \textit{Quantise} \;:\; \mu X.a.r.e.\tau.c.X}
\qquad
\frac{}{\sigma,\ \{a\},\ \{\} \;\vdash\; \textit{BarGraph} \;:\; \mu X.a.r.e.\tau.X}
\]

Table 6: Composition Type

\[
\frac{\sigma_s, I_P, O_P \vdash S : P \qquad \sigma_s, I_Q, O_Q \vdash T : Q \qquad I_P \cap I_Q = \emptyset \qquad O_P \cap O_Q = \emptyset}
{\sigma_s,\ I_P \cup I_Q,\ O_P \cup O_Q \;\vdash\; S;T \;:\; P \mid Q}
\]

Table 7: Wire Types

\[
\frac{}{\sigma,\ \{\},\ \{\} \;\vdash\; \textit{wire}(n,c,m,a) \;:\; \mu X.c_n.a_m.\sigma_{c_n}.X}
\qquad
\frac{}{\sigma,\ \{\},\ \{\} \;\vdash\; \textit{iwire}(a,m,b) \;:\; \mu X.a.b_m.\sigma_a.X}
\]
\[
\frac{}{\sigma,\ \{\},\ \{\} \;\vdash\; \textit{owire}(n,c,d) \;:\; \mu X.c_n.d.\sigma_{c_n}.X}
\]

Table 8: Instantiation Types

\[
\frac{\sigma,\ \{\},\ O \;\vdash\; C : P}
{\sigma,\ \{\},\ \{c_n \mid c \in O\} \;\vdash\; \textit{insts}(C,n) \;:\; \bigl(P \;\mid\; \Pi_{c \in O}\, \mu X.c^{\sigma_{c_n}}.\mu Y.\lfloor c_n.Y \rfloor \sigma_{c_n}(X) \;\mid\; \mu X.r.t_e^{\sigma_n}.e.\sigma_n.\lfloor t_e.\sigma.X \rfloor \sigma(t_c.X)\bigr) \setminus (O \cup \{r,e\}) \,/\, \sigma_n}
\]
\[
\frac{\sigma,\ I,\ O \;\vdash\; C : P}
{\sigma,\ \{a_n \mid a \in I\},\ \{c_n \mid c \in O\} \;\vdash\; \textit{instc}(C,n) \;:\; \bigl(P\{a \mapsto a_n \mid a \in I\} \;\mid\; \Pi_{c \in O}\, \mu X.c^{\sigma_{c_n}}.\mu Y.\lfloor c_n.Y \rfloor \sigma_{c_n}(X) \;\mid\; \mu X.r^{\sigma_n}.(r_e.t_c^{\sigma_n}.e.\sigma_n.\lfloor t_c.X \rfloor \sigma(t_e.X) \;+\; t_c^{\sigma_n}.e.\sigma_n.\lfloor t_c.X \rfloor \sigma(t_e.X))\bigr) \setminus (O \cup \{r,e\}) \,/\, \sigma_n}
\]

Table 9: Encapsulation Types

\[
\frac{\sigma_s,\ \{\},\ O_1 \;\vdash\; S : P \qquad \sigma_s,\ I,\ O_2 \;\vdash\; C : Q \qquad (P \mid Q \mid t_e.0) \setminus (I \cup O_1 \cup O_2) \,/\, \{\sigma_o \mid o \in O_1 \cup O_2\} \;\approx_{\sigma_s}\; 0}
{\sigma_s,\ \{\},\ \{\} \;\vdash\; \textit{system}(S,C) \;:\; 0}
\]
\[
\frac{\sigma_n,\ I_1,\ O_1 \;\vdash\; C : P \qquad (P \mid \mu X.r_e.r.e.t_c^{\sigma_n}.\sigma_n.X) \setminus (I_1 \cup O_1 \cup \{r_e,t_c\}) \,/\, \{\sigma_o \mid o \in O_1\} \,/\, \sigma_n \;\preccurlyeq_{\sigma_s}\; Q}
{\sigma_s,\ I_2,\ O_2 \;\vdash\; \textit{enc}(C,n,I_2,O_2) \;:\; Q}
\]
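As a worked instance of the composition rule of Table 6, applied to two of the ground types of Table 5 (a hypothetical pairing, offered only as an illustration of how the side-conditions on input and output sets are discharged):

```latex
\[
\frac{\sigma,\ \{\},\ \{c\} \vdash \textit{Soundcard} : P
      \qquad
      \sigma,\ \{a\},\ \{\} \vdash \textit{BarGraph} : Q
      \qquad
      \{\} \cap \{a\} = \emptyset
      \qquad
      \{c\} \cap \{\} = \emptyset}
     {\sigma,\ \{a\},\ \{c\} \;\vdash\;
      \textit{Soundcard};\textit{BarGraph} \;:\; P \mid Q}
\]
```

where \(P = \mu X.r.e.\tau.c.X\) and \(Q = \mu X.a.r.e.\tau.X\); the conclusion carries the disjoint union of the input and output sets, and the composite type is simply the parallel composition of the component types.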