
Asynchronous and Deterministic Objects

Denis Caromel, Ludovic Henrio, Bernard Paul Serpette
INRIA Sophia-Antipolis - CNRS - I3S - Univ. Nice Sophia Antipolis
2004 route des Lucioles - B.P. 93, F-06902 Sophia-Antipolis Cedex

{caromel, henrio, serpette}@sophia.inria.fr

Abstract. This paper aims at providing confluence and determinism properties for concurrent processes, more specifically within the paradigm of object-oriented systems. Such results should allow one to program parallel and distributed applications that behave deterministically, even when they are distributed over local or wide area networks. For that purpose, an object calculus is proposed. Its key characteristics are asynchronous communications with futures, and sequential execution within each process. While most previous works exhibit confluence properties only on specific programs, or patterns of programs, a general condition for confluence is presented here. It is further put into practice to show the deterministic behavior of a typical example.

Categories and Subject Descriptors: D.1.3: Concurrent Programming; F.3.2: Semantics of Programming Languages
General Terms: Languages
Keywords: Object calculus, concurrency, distribution, parallelism, object-oriented languages, determinism, futures.

1 Introduction

Confluence properties relieve the programmer from studying the interleaving of concurrent instructions and communications. Many different approaches have been followed to ensure confluence of calculi, languages, or programs. Linear channels in the π-calculus [30, 20], non-interference properties [29], and atomic type systems [8] for shared-memory systems are typical examples. Starting from deterministic calculi, Process Networks [17] and Jones' technique in πoβλ [16] create deterministic concurrency. But none of them concerns a concurrent, imperative object language with asynchronous communications.

In this paper, we propose a calculus in which interferences between processes are clearly identifiable, thus simplifying reasoning about concurrent object-oriented programs. Our confluence property has a much more general goal: it identifies the sources of nondeterminism and provides a minimal characterization of program behavior. Furthermore, some programs must behave deterministically: one could not imagine a nondeterministic result for a binary search or a prime-number search, yet only a few works ensure such results. Seeking determinism for parallel programming, we propose a calculus in which such properties can be verified either dynamically or by static analysis. A first contribution of this work lies in the design of an appropriate concurrent object calculus (ASP, Asynchronous Sequential Processes). From a more practical point of view, we aim at a calculus model that is effective for parallel and distributed computations, both on local and wide area networks. Asynchronous communication is at the root of the calculus (for the sake of decoupling processes and hiding network latency). In ASP some objects are active, and active objects are accessible through global (remote) references. Communications are performed through asynchronous method calls called requests: the calling object sends a method call to an active object but does not wait for the result. Instead, the request sender obtains a future representing the result that will be calculated; the result is updated when it becomes available. Inside each activity, execution is sequential: only one thread performs instructions. ASP is based on a purely sequential and classical object calculus, the impς-calculus [1], extended with two parallel operators: Active and Serve. Active turns a standard object into an active one, executing in parallel and serving requests in the order specified by the Serve operator.
Automatic synchronization of processes comes from a data-driven synchronization mechanism called wait-by-necessity [5]: a wait automatically occurs upon a strict operation (like a method call) on a communication result that is not yet available (a future). The combination of systematic asynchronous communications towards processes, wait-by-necessity, and automatic deep copy of parameters provides a smooth transition from sequential to concurrent computations. An important feature of ASP is that futures can be passed between processes, both as method parameters and as method results. The main contributions of this paper are:

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. POPL’04, January 14–16, 2004, Venice, Italy. Copyright 2004 ACM 1-58113-729-X/04/0001 ...$5.00

• The formal definition of an imperative and asynchronous object calculus with futures (ASP).

• Parallel programming as a smooth extension of sequential objects, mainly due to data-flow synchronizations (wait-by-necessity) and pervasive futures with concurrent out-of-order updates.

• The characterization of sufficient conditions for deterministic behavior in such a highly asynchronous setting.

On the practical side, this work represents the formalization of an existing library that takes into account the practical constraints of asynchrony in wide-area networks; the ASP model is implemented as an open-source Java library (ProActive [7]) allowing parallel and distributed programming.

Section 2 presents the ASP calculus: it starts with a sequential part based on the impς-calculus, then the parallel calculus and its principles are presented; the example of a parallel binary tree illustrates the calculus. Section 3 presents the semantics of ASP, and Section 4 its main properties, including confluence. ASP is compared with other calculi in Section 5.
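Before turning to the calculus, the wait-by-necessity mechanism can be illustrated outside of it. The Python sketch below is only an analogy (all names are ours, not ProActive's API): a caller obtains a future immediately and blocks only when it performs a strict operation on the result.

```python
import threading

class Future:
    """Placeholder for a result computed asynchronously.
    The strict operation value() blocks until the result arrives."""
    def __init__(self):
        self._done = threading.Event()
        self._result = None

    def _update(self, result):
        # Performed by the callee: store the value and wake up waiters.
        self._result = result
        self._done.set()

    def value(self):
        # Wait-by-necessity: block until the future has been updated.
        self._done.wait()
        return self._result

def async_call(fn, *args):
    """Send an asynchronous request: return a future at once."""
    fut = Future()
    threading.Thread(target=lambda: fut._update(fn(*args))).start()
    return fut

# The caller continues immediately; it blocks only on fut.value().
fut = async_call(lambda x: x * x, 7)
print(fut.value())  # 49
```

The caller never handles threads or locks explicitly; synchronization is driven purely by the data dependency on the result, which is the essence of wait-by-necessity.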

2 Calculus

2.1 Sequential calculus

The ASP sequential calculus is very similar to the imperative ς-calculus [1, 11]. A few characteristics differ between the impς-calculus and the ASP sequential calculus:

• Because the arguments passed to active object methods will play a particular role, we add a parameter to every method, as in [23]: in addition to the self argument of a method (written xj), a parameter (yj in our syntax) can be passed to it.

• We do not include method update in our calculus: we do not find it necessary, and updatable methods can be expressed in our calculus anyway. Method update could nevertheless be added.

• As in [11], in order to simplify the semantics, locations (references to objects in a store) can appear in terms.

a, b ∈ L ::= x                                              variable
    | [li = bi ; mj = ς(xj, yj)aj]^{i∈1..n, j∈1..m}          object
    | a.li                                                  field access
    | a.mj(b)                                               method call
    | a.li := b                                             field update
    | clone(a)                                              superficial copy
    | ι                                                     location (not in source)

Note that let x = a in b¹ and sequence a; b² can easily be expressed in our calculus and will be used in the following. Lambda expressions, as well as methods with zero or more than one argument, are also easy to encode and will be used in this paper.

¹ let x = a in b ≜ [m = ς(z, x)b].m(a)
² a; b ≜ [m = ς(z, x)b].m(a), where x does not occur free in b

Semantic structures. Let locs(a) be the set of locations occurring in a, and fv(a) the set of variables occurring free in a. The source terms (initial expressions) are closed terms (fv(a) = ∅) without any location (locs(a) = ∅). Locations appear when objects are put in the store. The substitution of b by c in a is written a{{b ← c}}; θi will denote substitutions. Let ≡ be equality modulo renaming of locations (substitution of locations by locations), provided the renaming is injective (alpha-conversion of locations).

The store is a mapping from locations to objects in which all fields are reduced:

σ ::= {ι ↦ [li = ιi ; mj = ς(xj, yj)aj]^{i∈1..n, j∈1..m}}

Let o ::= [li = ιi ; mj = ς(xj, yj)aj]^{i∈1..n, j∈1..m} range over reduced objects. Let dom(σ) be the set of locations defined by σ, and let σ :: σ′ append two stores with disjoint locations. σ + σ′ is defined by:

(σ + σ′)(ι) = σ(ι) if ι ∈ dom(σ), and σ′(ι) otherwise

As in [11], reduction contexts are expressions with a single hole (•) that specifies the order of reduction; for example, objects are reduced by a left-to-right evaluation of their fields. Reduction contexts are defined by:

R ::= • | R.li | R.mj(b) | ι.mj(R) | R.li := b | ι.l := R | clone(R)
    | [li = ιi^{i∈1..k−1} ; lk = R ; lk′ = bk′^{k′∈k+1..n} ; mj = ς(xj, yj)aj^{j∈1..m}]

We define a small-step, substitution-based operational semantics for the ASP sequential calculus (Table 1); it is similar to the one defined in [11]. It defines new object creation (STOREALLOC), field access (FIELD), method invocation (INVOKE), field update (UPDATE) and shallow clone (CLONE):

(STOREALLOC)   ι ∉ dom(σ)
               (R[o], σ) →S (R[ι], {ι ↦ o} :: σ)

(FIELD)        σ(ι) = [li = ιi ; mj = ς(xj, yj)aj]^{i∈1..n, j∈1..m}    k ∈ 1..n
               (R[ι.lk], σ) →S (R[ιk], σ)

(INVOKE)       σ(ι) = [li = ιi ; mj = ς(xj, yj)aj]^{i∈1..n, j∈1..m}    k ∈ 1..m
               (R[ι.mk(ι′)], σ) →S (R[ak{{xk ← ι, yk ← ι′}}], σ)

(UPDATE)       σ(ι) = [li = ιi ; mj = ς(xj, yj)aj]^{i∈1..n, j∈1..m}    k ∈ 1..n
               o′ = [li = ιi^{i∈1..k−1} ; lk = ι′ ; lk′ = ιk′^{k′∈k+1..n} ; mj = ς(xj, yj)aj^{j∈1..m}]
               (R[ι.lk := ι′], σ) →S (R[ι], {ι ↦ o′} + σ)

(CLONE)        ι′ ∉ dom(σ)
               (R[clone(ι)], σ) →S (R[ι′], {ι′ ↦ σ(ι)} :: σ)

Table 1. Sequential reduction
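The store-based rules of Table 1 can be exercised in a small executable model. The Python sketch below is a hypothetical encoding, not the formal semantics: locations are integers and objects are maps from field names to locations. It makes concrete that CLONE is shallow: the clone gets a fresh location and its own field table, but the field contents are shared locations.

```python
# Toy model of the sequential reduction rules (illustrative encoding only).

def store_alloc(store, obj):
    loc = len(store)            # fresh location, as in STOREALLOC
    store[loc] = obj
    return loc

def field(store, loc, l):
    # FIELD: return the location held by field l of the object at loc.
    return store[loc][l]

def update(store, loc, l, loc2):
    # UPDATE: rebind field l in a fresh object, keeping other fields.
    store[loc] = {**store[loc], l: loc2}
    return loc

def clone(store, loc):
    # CLONE is shallow: copy only the top-level field table.
    return store_alloc(store, dict(store[loc]))

store = {}
v1 = store_alloc(store, {})           # some reduced object
o = store_alloc(store, {"l": v1})
c = clone(store, o)                   # c shares v1 with o, at first
v2 = store_alloc(store, {})
update(store, c, "l", v2)             # updating the clone...
print(field(store, o, "l") == v1)     # ...leaves the original intact: True
print(field(store, c, "l") == v2)     # True
```

The update on the clone does not affect the original because UPDATE replaces the clone's own field table; sharing would only be observable through the common locations the two tables point to.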

2.2 Parallel calculus

An active object is an object that can be referenced by distant pointers and can handle distant asynchronous method calls (requests). Informally, an activity is formed by a single active object, some passive (non-active) objects, an execution thread (called a process) and an environment. When a request is received by an activity it is stored in a request queue. Later on, this request will be served and, when the result has been calculated, it will be stored in a list of future values. Pending requests are the requests in a request queue, current requests are those being served, and served requests (requests whose service is finished) have a result value.

[Figure 1. Example of a parallel configuration. The legend distinguishes, for each activity: the active object, passive objects, the current term, local and active-object references, the current request, pending requests, request parameters, future references and future values; futures f2 and f3 point to pending requests, f to a computed value, and foo labels a request on method foo.]

In ASP, each activity has a single process and a single active object. Processes of different activities execute instructions concurrently and interact only through requests. When activity α sends a request to activity β, β stores it in its pending request queue and α continues its execution; in α, a future represents the result of this request until the result is calculated and returned to α (updated). ASP activities do not share memory. Moreover, synchronization is only due to wait-by-necessity on a future. Indeed, a future reference is not sufficient to perform strict operations on an object (e.g. a field access or a method call); thus a strict operation on a future is blocked until the value associated with this future has been updated.

ASP syntax is extended in order to introduce parallelism. The Active operator creates a new activity by activating an object a. Serve allows one to specify which requests should be served. ⇑ is used to remember the continuation of the current request while another one is served; it should not be present in source programs.

a, b ∈ L ::= . . .
    | Active(a, s)    creates an activity; s is either a service method mj or ∅ for a FIFO service
    | Serve(M)        specifies the requests to serve
    | a ⇑ f, b        a with continuation b (not in source)

where M is a set of method labels used to specify which request has to be served: M = m1, . . . , mn.

2.3 Informal semantics

Figure 1 gives a representation of a configuration consisting of two activities. In every activity α, a current term aα represents the current computation. Every activity has its own store σα, which contains one active object and many passive ones. An activity consists of a process, a store, several pending requests and calculated replies (results of requests); it is able to handle requests coming from other activities. Every object belongs to exactly one activity (no shared memory): passive objects are only referenced by objects of the same activity, but passive objects can reference active objects.

The Active operator (Active(a, mj)) creates a new activity α with the object a at its root. The object a is copied, together with all its dependencies³ (deep copy), into the new activity. AO(α) acts as a proxy for the active object of activity α: all subsequent calls to methods of a via AO(α) are considered as remote request sends to the active object of activity α. The second argument of the Active operator is the name of the method⁴ that will be called as soon as the object is activated. If no service method is specified, a FIFO service is performed.

Communications between activities are due to method calls on active objects and to the return of the corresponding results. A method call on an active object (Active(o).foo()) consists in atomically adding an entry to the pending requests of the callee and associating a future to the response. In practice, the request sender waits for an acknowledgment before continuing its execution. In Figure 1, futures f2 and f3 denote pointers to not-yet-computed requests, while f is a future pointing to a value computed by a request sent to β. Arguments of requests and values of futures are deeply copied³ when they are transmitted between activities; active objects and futures are transmitted with a reference semantics.

The primitive Serve can appear at any point in source code. Its execution stops the activity until a request matching its arguments is found in the request queue; the first matching request is then executed (served).

Futures are generalized references that can be manipulated classically as long as no strict operation is performed on the object they represent. Futures can be transmitted to other activities and several objects can reference them. But, upon a strict operation (field or method access, field update, clone) on a future, the execution is stopped until the value of the future has been updated (wait-by-necessity). When a request is treated, the corresponding result (future value) becomes available; each activity stores the associations between futures and their computed values. The moment when the value of a future is returned is not specified in our calculus: from a theoretical point of view, every reference to a future can be replaced by a copy³ of the future value (partial or complete) at any time. In Figure 1, the pending requests are merged with the future list (indeed, futures correspond to previously executed requests).

³ to prevent distant references to passive objects
⁴ with no argument
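The informal semantics above (one process per activity, a pending request queue, asynchronous calls returning futures) can be mimicked with threads. The sketch below is a Python analogy with names of our own, not ProActive's API; its service loop plays the role of Repeat(Serve(M)) over all methods, i.e. a FIFO service.

```python
import queue
import threading

class Future:
    def __init__(self):
        self._done = threading.Event()
        self._result = None
    def _update(self, result):
        self._result = result
        self._done.set()
    def value(self):
        # Strict operation: wait-by-necessity.
        self._done.wait()
        return self._result

class ActiveObject:
    """One thread (the process) serves queued requests sequentially, FIFO."""
    def __init__(self, obj):
        self._obj = obj
        self._requests = queue.Queue()       # pending request queue
        threading.Thread(target=self._fifo_service, daemon=True).start()
    def _fifo_service(self):
        while True:
            method, args, fut = self._requests.get()   # serve oldest request
            fut._update(getattr(self._obj, method)(*args))  # then reply
    def request(self, method, *args):
        # Asynchronous method call: enqueue and return a future at once.
        fut = Future()
        self._requests.put((method, args, fut))
        return fut

class Counter:
    def __init__(self):
        self.n = 0
    def incr(self, k):
        self.n += k
        return self.n

c = ActiveObject(Counter())
f1 = c.request("incr", 1)
f2 = c.request("incr", 2)
print(f2.value())   # 3: requests are served sequentially, in arrival order
```

Because a single thread serves the queue, requests never interleave inside the activity, which mirrors the sequential execution within each ASP activity.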

2.4 Example

Figure 2 shows an example of a simple parallel binary tree with two methods: add and search. Each node can be turned into an active object. The calculus allows us to express lambda expressions, integers and comparisons (Church integers for example), booleans and conditional expressions, methods with zero or many parameters, and the definition of classes. All these constructs can easily be expressed in ASP, and most of them have been previously defined on the ς-calculus.

BT ≜ [new = ς(c,z)[empty = true, lft = [], rgt = [],
         key = 0, val = [],
         search = ς(s,k)(c.search s k),
         add = ς(s,k,v)(c.add s k v)],
      search = ς(c,z)λs k. if (s.empty) then []
         else if (s.key == k) then s.val
         else if (s.key > k) then s.lft.search(k)
         else s.rgt.search(k),
      add = ς(c,z)λs k v. if (s.empty) then
           (s.rgt := Factory(s); s.lft := Factory(s);
            s.val := v; s.key := k; s.empty := false; s)
         else if (s.key > k) then s.lft.add(k,v)
         else if (s.key < k) then s.rgt.add(k,v)
         else (s.val := v; s)]

Factory(s) ≜ s.new in the sequential case, and
Factory(s) ≜ Active(s.new, ∅) for the concurrent BT.

Figure 2. Example: a binary tree

add stores a new key at the appropriate place and creates two empty nodes. Note that in the concurrent case, nodes are activated as soon as they are created.

search searches for a key in the tree and returns the value associated with it, or an empty object if the key is not found. new is the method invoked to create a new node. We parameterize the example by a factory able to create either a sequential node (sequential binary tree) or an active one (parallel binary tree). In the case of the parallel factory, the term

let tree = (BT.new).add(3,4).add(2,3).add(5,6).add(7,8) in
    [a = tree.search(5), b = tree.search(3)].b := tree.search(7)

creates a new binary tree, puts four values in it in parallel, and searches for two of them in parallel. Then it searches for another value and modifies the field b. It always reduces to [a = 6, b = 8]. Note that as soon as a request is delegated to another node, a new one can be handled. Moreover, when the root of the tree is the only node reachable, by a single activity, the result of concurrent calls is deterministic (cf. Section 4).
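An ordinary sequential rendering of Figure 2 makes the claimed result easy to check. The Python class below is an illustrative stand-in for the ς-calculus encoding (the parallel version would wrap each freshly created node in an activity, as the concurrent Factory does).

```python
# Sequential rendering of the binary tree of Figure 2 (illustrative only).
class BT:
    def __init__(self):
        self.empty, self.lft, self.rgt = True, None, None
        self.key, self.val = 0, None

    def add(self, k, v):
        if self.empty:
            self.lft, self.rgt = BT(), BT()   # Factory(s) = s.new here
            self.key, self.val, self.empty = k, v, False
        elif self.key > k:
            self.lft.add(k, v)
        elif self.key < k:
            self.rgt.add(k, v)
        else:
            self.val = v                      # existing key: overwrite value
        return self                           # allow chained adds, as in the term

    def search(self, k):
        if self.empty:
            return None                       # key not found
        if self.key == k:
            return self.val
        return (self.lft if self.key > k else self.rgt).search(k)

tree = BT().add(3, 4).add(2, 3).add(5, 6).add(7, 8)
result = {"a": tree.search(5), "b": tree.search(3)}
result["b"] = tree.search(7)
print(result)   # {'a': 6, 'b': 8}
```

In the sequential version the adds and searches happen in program order; the point of the paper is that the active-node version reaches the same result despite running them concurrently.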

3 Parallel semantics

There are three distinct name spaces: activities (α, β, γ ∈ Act), locations (ι) and futures (fi), in addition to the field, method and variable identifiers, which already appear in the source code and are not created dynamically. Note that locations and future identifiers fi are local to an activity. A future is characterized by its identifier fi, the source activity α and the destination activity β of the corresponding request (fi^{α→β}). A parallel configuration is a set of activities:

P, Q ::= α[a; σ; ι; F; R; f] ∥ β[. . .] ∥ . . .

characterized by:

• the current term a to be reduced: a = b ⇑ fi^{γ→α}, b′ (the process). a contains several terms corresponding to the requests being treated, separated by ⇑. The left part b is the term currently evaluated; the right part fi^{γ→α}, b′ is the continuation: the future and term corresponding to a request that has been stopped before the end of its execution (because of a Serve primitive). Of course, b′ can itself contain continuations;

• the store σ containing all objects of the activity α;

• the active object location ι: the location of the active object of activity α (master object of the activity);

• the future values F = {f ↦ ι}, a list associating, with each served request, the location of its calculated result;

• the request queue R = {[mj; ι; fi^{γ→α}]}, a list of pending requests;

• the current future f, the future associated with the term currently evaluated.

A request can be seen as the "reification" of a method call (see for example [28]). Each request r ::= [mj; ι; fi^{α→β}] consists of the name of the target method (mj), the location of the argument passed to the request (ι), and the future identifier that will be associated with the response to this request (fi^{α→β}).

A reference to the active object of activity α is denoted by AO(α), and a reference to the future fi^{α→β} by fut(fi^{α→β}). Because of distant pointers, the store codomain is extended with generalized references (i.e. references to futures and active objects). Reduced objects become:

o ::= [li = ιi ; mj = ς(xj, yj)aj]^{i∈1..n, j∈1..m} | AO(α) | fut(fi^{α→β})

(LOCAL)
    (a, σ) →S (a′, σ′)    (→S does not clone a future)
    α[a; σ; ι; F; R; f] ∥ P −→ α[a′; σ′; ι; F; R; f] ∥ P

(NEWACT)
    γ fresh activity    ι′ ∉ dom(σ)    σ′ = {ι′ ↦ AO(γ)} :: σ    σγ = copy(ι′′, σ)
    Service = if (mj = ∅) then FifoService else ι′′.mj()
    α[R[Active(ι′′, mj)]; σ; ι; F; R; f] ∥ P −→
        α[R[ι′]; σ′; ι; F; R; f] ∥ γ[Service; σγ; ι′′; ∅; ∅; ∅] ∥ P

(REQUEST)
    σα(ι) = AO(β)    ι′′ ∉ dom(σβ)    σ′β = Copy&Merge(σα, ι′ ; σβ, ι′′)
    fi^{α→β} new future    ιf ∉ dom(σα)    σ′α = {ιf ↦ fut(fi^{α→β})} :: σα
    α[R[ι.mj(ι′)]; σα; ια; Fα; Rα; fα] ∥ β[aβ; σβ; ιβ; Fβ; Rβ; fβ] ∥ P −→
        α[R[ιf]; σ′α; ια; Fα; Rα; fα] ∥ β[aβ; σ′β; ιβ; Fβ; Rβ :: [mj; ι′′; fi^{α→β}]; fβ] ∥ P

(SERVE)
    R = R′ :: [mj; ιr; f′] :: R′′    mj ∈ M    ∀m ∈ M, m ∉ R′
    α[R[Serve(M)]; σ; ι; F; R; f] ∥ P −→ α[ι.mj(ιr) ⇑ f, R[[]]; σ; ι; F; R′ :: R′′; f′] ∥ P

(ENDSERVICE)
    ι′ ∉ dom(σ)    F′ = F :: {f ↦ ι′}    σ′ = Copy&Merge(σ, ι ; σ, ι′)
    α[ι ⇑ f′, a; σ; ι; F; R; f] ∥ P −→ α[a; σ′; ι; F′; R; f′] ∥ P

(REPLY)
    σα(ι) = fut(fi^{γ→β})    Fβ(fi^{γ→β}) = ιf    σ′α = Copy&Merge(σβ, ιf ; σα, ι)
    α[aα; σα; ια; Fα; Rα; fα] ∥ β[aβ; σβ; ιβ; Fβ; Rβ; fβ] ∥ P −→
        α[aα; σ′α; ια; Fα; Rα; fα] ∥ β[aβ; σβ; ιβ; Fβ; Rβ; fβ] ∥ P

Table 2. Parallel reduction

The function Merge merges two stores (it merges σ and σ′ independently, except for ι, which is taken from σ′):

Merge(ι, σ, σ′) = σ′θ + σ  where θ = {ι′ ← ι′′ | ι′ ∈ dom(σ′) ∩ dom(σ) \ {ι}, ι′′ fresh}

copy(ι, σ) designates the deep copy of store σ starting at location ι, that is, the part of the store σ that contains the object σ(ι) and, recursively, all (local) objects that it references. The deep copy is the smallest store satisfying the rules of Table 3. The deep copy stops when a generalized reference is encountered; in that case, the new store contains the generalized reference. In Table 3, the first two rules specify which locations should be present in the created store, and the last one states that the codomain is the same in the copied and the original store (copy of the object values). A deep copy can be calculated by marking the source object and, recursively, all objects referenced by marked objects; when a fixed point is reached, the deep copy is the part of the store containing the marked objects.

ι ∈ dom(copy(ι, σ))
ι′ ∈ dom(copy(ι, σ)) ⇒ locs(σ(ι′)) ⊆ dom(copy(ι, σ))
ι′ ∈ dom(copy(ι, σ)) ⇒ copy(ι, σ)(ι′) = σ(ι′)

Table 3. Deep copy

The following operator deeply copies the part of the store σ starting at location ι to the location ι′ of the store σ′; except for ι′, the deep copy is added in a fresh part of the store σ′:

Copy&Merge(σ, ι ; σ′, ι′) ≜ Merge(ι′, σ′, copy(ι, σ){{ι ← ι′}})

Reduction contexts become:

R ::= . . . | Active(R, mj) | R ⇑ f, a

The rules of Table 2 present the formal semantics of ASP (the concatenation of lists is denoted by ::).

LOCAL: inside each activity, a local reduction can occur following the rules of Table 1. Note that the sequential rules FIELD, INVOKE, UPDATE and CLONE⁵ are stuck (wait-by-necessity) when the target location is a generalized reference. Only REQUEST can invoke an active object method, and REPLY may transform a future reference into a reachable object (ending a wait-by-necessity).

NEWACT creates a new activity γ containing the deep copy of the object, with an empty current future, empty pending requests and empty future values. A generalized reference AO(γ) to this activity is stored in the source activity α; other references to ι in α are unchanged (still pointing to a passive object). mj specifies the service (the first method executed); it has no argument. If no service method is specified, a FIFO service is performed. An infinite loop Repeat and the FIFO service are defined below (M is the set of all method labels defined by the activated object):

Repeat(a) ≜ [repeat = ς(x)(a; x.repeat())].repeat()
FifoService ≜ Repeat(Serve(M))

REQUEST sends a new request from activity α to activity β (Figure 3). A new future fi^{α→β} is created to represent the result of the request, and a reference to this future is stored in α. A request containing the name of the method, the location of a deep copy of the argument stored in σβ, and the associated future ([mj; ι′′; fi^{α→β}]) is added to the pending requests of β (Rβ).

SERVE serves a new request (Figure 4). The current reduction is stopped and stored as a continuation (future f, expression R[[]]), and the oldest (first received) pending request concerning one of the labels specified in M is treated. The activity is stuck until a matching request is found in the pending request queue.

⁵ Cloning a future is considered a strict operation, to ensure determinism.
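The marking algorithm for copy(ι, σ) described above can be written down directly. The Python sketch below uses a hypothetical store representation (locations map to either a plain object or a generalized reference) and computes the smallest sub-store closed under locs, stopping at generalized references.

```python
# Sketch of copy(ι, σ): deep copy of the part of a store reachable from a
# location, stopping at generalized references (futures / active objects).
# Representation is hypothetical: store maps a location to either
# ("obj", {field: location}) or ("ref", description).

def copy(loc, store):
    reached, todo = {}, [loc]          # the marking algorithm of Table 3
    while todo:
        l = todo.pop()
        if l in reached:
            continue
        reached[l] = store[l]
        kind, payload = store[l]
        if kind == "obj":              # recurse only through plain objects
            todo.extend(payload.values())
    return reached                     # smallest sub-store closed under locs()

store = {
    0: ("obj", {"l1": 1, "l2": 2}),
    1: ("ref", "future f1"),           # the deep copy stops here
    2: ("obj", {"l": 0}),              # cycle back to 0
    3: ("obj", {}),                    # unreachable from 0: not copied
}
print(sorted(copy(0, store)))   # [0, 1, 2]
```

The fixed point handles cycles naturally, and the generalized reference at location 1 is kept in the copy without being followed, exactly as the rules of Table 3 prescribe.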

[Figure 3. REQUEST]

[Figure 4. SERVE]

[Figure 5. ENDSERVICE]

[Figure 6. REPLY of future f]

Initial configuration

An initial configuration consists of a single activity, called the main activity, containing only a current term: µ[a; ∅; ∅; ∅; ∅; ∅]. This activity can only communicate by sending requests or receiving replies.

Note that the syntax of intermediate terms guarantees that there are no shared references in ASP except future and active object references.

ENDSERVICE applies when the current request is finished (the currently evaluated term is reduced to a location). It associates the location of the result with the future f. The result is (deep) copied to prevent post-service modification of the value, and the new current term and current future are obtained from the continuation (Figure 5).

REPLY updates a future value (Figure 6): it replaces a reference to a future by its value. Deliberately, it is not specified when this rule should be applied; it is only required that one activity contains a reference to a future and that another one has calculated the corresponding result. The only constraint on the update of future values is that strict operations (e.g. INVOKE) need the real object value of some of their operands. Such operations may lead to a wait-by-necessity, which can only be resolved by the update of the future value. Note that a future fi^{γ→β} can be updated in an activity different from the origin of the request (α ≠ γ) because of the capability to transmit futures (e.g. as method call parameters).

Note that an activity may be stuck either on a wait-by-necessity on a future (upon a strict operation), on the service of a set of labels with no corresponding request in the request queue, or when it tries to access or modify a field through a reference to an active object.

4 Properties and confluence

This section starts with a property about the object topology inside activities (4.2); it then introduces a notion of compatibility between terms (4.3) and an equivalence modulo replies (4.4). Finally, a sufficient condition for confluence between ASP reductions (4.5) and a specification of a set of terms behaving deterministically, the DON terms (4.6), are given. A static approximation allows us to define a simple deterministic sub-calculus in 4.7. Detailed proofs of the properties presented in this section can be found in [6].

4.1 Notations and hypotheses

In the following, αP denotes the activity α of configuration P. We suppose that freshly allocated activities are chosen deterministically: the first activity created by α will have the same identifier in all executions. We consider that the future identifier fi is the name of the invoked method, indexed by the number of requests that have already been received by β. Thus, if the 4th request received by β comes from γ and concerns method foo, its future identifier will be foo4^{γ→β}.

−→* denotes the transitive closure of −→, and −→^T denotes the application of rule T (e.g. LOCAL, REPLY, . . . ).

4.2 Futures and parameters isolation

The following theorem states that the value of each future and each request parameter is situated in an isolated part of the store. Figure 7 illustrates the isolation of a future value (on the left) and of a request parameter (on the right).

[Figure 7. Store partitioning: within the store σα, the deep copies copy(ιf, σα) of future values and copy(ιr, σα) of request parameters are disjoint from the active store of α.]

THEOREM 1 (STORE PARTITIONING). Let

ActiveStore(α) = copy(ια, σα) ∪ ⋃_{ι ∈ locs(aα)} copy(ι, σα)

At any stage of computation, each activity satisfies the following invariant:

σα ⊇ ActiveStore(α) ⊎ (⊎_{{f ↦ ιf} ∈ Fα} copy(ιf, σα)) ⊎ (⊎_{[mj; ιr; f] ∈ Rα} copy(ιr, σα))

where ⊎ is the disjoint union.

This invariant is proved by checking it on each reduction rule; it is mainly due to the deep copies performed by REQUEST, ENDSERVICE and REPLY. The part of σα that does not belong to the preceding partition may be freely garbage collected.

4.3 Configuration compatibility

The principles are the following: two configurations are compatible if the served, current and pending requests of one form a prefix of the same list in the other; moreover, if two requests cannot interfere, that is, if no Serve(M) can concern both requests, then these requests can be safely exchanged. In ASP, the order in which activities send requests to a given activity fully determines the behavior of the program (Theorem 3, RSL-confluence). Note that this means that future updates and the imperative aspects of ASP do not act upon the result of evaluation.

Let FL(α) denote the list of futures corresponding to requests addressed to activity α:

DEFINITION 1 (FUTURES LIST). Let FL(α) be the list of the futures that have been calculated, the current futures (the one in the activity and all those in the continuations of the current expression) and the futures corresponding to pending requests. It is depicted by the rectangles of Figure 1.

FL(α) = {fi^{β→α} | {fi^{β→α} ↦ ι} ∈ Fα} :: {fα} :: F(aα) :: {fi^{β→α} | [mj, ι, fi^{β→α}] ∈ Rα}

where F(a ⇑ f, b) = f :: F(b) and F(a) = ∅ if a ≠ a′ ⇑ f, b.

DEFINITION 2 (REQUEST SENDER LIST). The request sender list (RSL) is the list of request senders, in the order in which the requests have been received, indexed by the invoked method. The i-th element of RSLα is defined by:

(RSLα)i = β_f if fi^{β→α} ∈ FL(α)

The RSL is obtained from the futures associated with served requests, current requests and pending requests.

DEFINITION 3 (RSL COMPARISON ⪯). RSLs are ordered by the prefix order on activities:

α1_{f1} . . . αn_{fn} ⪯ α′1_{f′1} . . . α′m_{f′m} ⇔ n ≤ m ∧ ∀i ∈ [1..n], αi = α′i

DEFINITION 4 (RSL COMPATIBILITY ⋈). Two RSLs are compatible if one is a prefix of the other:

RSLα ⋈ RSLβ ⇔ RSLα ⊔ RSLβ exists ⇔ RSLα ⪯ RSLβ ∨ RSLβ ⪯ RSLα

Let MαP0 be a static approximation of the sets M that can appear in the Serve(M) instructions of αP. For a given source program P0, for each created activity α, we consider that there is a set MαP0 such that, if α is able to perform a Serve(M), then M ∈ MαP0.

DEFINITION 5 (POTENTIAL SERVICES). Let P0 be an initial configuration. MαP0 is any set verifying:

P0 −→* P ∧ aαP = R[Serve(M)] ⇒ M ∈ MαP0

Let RSLα|M denote the restriction of the RSL to the set of labels M (for example, (α_{f0} :: β_{f1} :: γ_{f2})|_{f0, f2} = α_{f0} :: γ_{f2}).

Two configurations deriving from the same source term are said to be compatible if all the restrictions of their RSLs that can be served are compatible:

DEFINITION 6 (CONFIGURATION COMPATIBILITY P ⋈ Q). If P0 is an initial configuration such that P0 −→* P and P0 −→* Q, then

P ⋈ Q ⇔ ∀α ∈ P ∩ Q, ∀M ∈ MαP0, RSLαP|M ⋈ RSLαQ|M

Following the RSL definition (Definition 2) the configuration compatibility only relies on the arrival order of requests; the future list (FL) order (Definition 1), potentially different on served and current requests, does not matter. In the general case, Serve operations can be performed while another request is being served; then the relation between RSL order and FL order can only be determined by a precise study. If no Serve operation is performed while another request is being served (only the service method performs Serve operations), then all the restrictions (to potential services) of the RSL and the FL are in the same order. In the FIFO case, the FL order and the RSL order are the same.
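The RSL machinery of Definitions 3 to 6 reduces to list prefix checks. The Python sketch below uses an assumed encoding of an RSL as a list of (sender, method) pairs, purely for illustration, and makes the restriction and compatibility tests executable.

```python
# Executable reading of Definitions 3-6 (hypothetical RSL encoding).

def restrict(rsl, labels):
    """RSL|M: keep only the requests whose method label belongs to M."""
    return [(sender, m) for (sender, m) in rsl if m in labels]

def prefix(r1, r2):
    """Definition 3: prefix comparison on the sender activities."""
    return len(r1) <= len(r2) and all(a == b for (a, _), (b, _) in zip(r1, r2))

def compatible(r1, r2, labels):
    """Definitions 4 and 6: the restrictions must be prefix-comparable."""
    p, q = restrict(r1, labels), restrict(r2, labels)
    return prefix(p, q) or prefix(q, p)

rsl_p = [("alpha", "foo"), ("beta", "bar"), ("gamma", "foo")]
rsl_q = [("alpha", "foo"), ("gamma", "foo"), ("beta", "baz")]
print(compatible(rsl_p, rsl_q, {"foo"}))                # True: same foo senders
print(compatible(rsl_p, rsl_q, {"foo", "bar", "baz"}))  # False: senders diverge
```

Restricting to a single potentially-served label set can make two globally different request histories compatible, which is exactly why Definition 6 quantifies over the potential services MαP0 rather than over all labels.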

4.4 Equivalence modulo replies Let ≡ denote the equivalence modulo renaming of locations and futures (renaming of activities is not necessary as activities are created deterministically).

Furthermore, ≡ allows one to exchange pending requests that cannot interfere. Indeed, pending requests can be reordered provided the compatibility of RSLs is maintained: requests that cannot interfere (because they cannot be served by the same Serve primitive) can be safely exchanged. Modulo these allowed permutations, equivalent configurations are composed of equivalent pending requests in the same order. More formally, ϕ is a valid permutation on the request queues of P if P ⋈ ϕ(P).

Equivalence modulo future replies (P ≡F Q) is an extension of ≡ authorizing the update of some calculated futures. This amounts to considering a reference to an already-calculated future as equivalent to a local reference to the part of the store which is the (deep copy of the) future value. In other words, a future is equivalent to a part of the store if this part of the store is equivalent to the (deep copy of the) future value, provided the updated part does not overlap with the remainder of the store. Two equivalent definitions of equivalence modulo future replies have been formalized in [6]. Note that this equivalence is decidable. As explained informally here, two configurations differing only by some future updates are equivalent:

P −REPLY→ P′ ⇒ P ≡F P′
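As a toy illustration (not the formal definition of [6]), one can model ≡F by normalizing stores: every reference to an already-calculated future is replaced by a deep copy of the future's value, which is exactly the effect of performing all possible REPLY updates. The representation below is our assumption:

```python
import copy

# Toy model: a value is a plain int, a dict (an object), or a future
# reference encoded as ("fut", f).
computed = {"f1": {"x": 1, "y": 2}}   # futures whose value is already known

def normalize(value):
    """Inline deep copies of the values of calculated futures -- the effect
    of performing all possible REPLY updates on this part of the store."""
    if isinstance(value, tuple) and value[0] == "fut":
        f = value[1]
        if f in computed:
            return normalize(copy.deepcopy(computed[f]))
        return value                       # not yet computed: keep the future
    if isinstance(value, dict):
        return {k: normalize(v) for k, v in value.items()}
    return value

before = {"a": ("fut", "f1"), "b": 0}      # future not yet updated
after = {"a": {"x": 1, "y": 2}, "b": 0}    # same store after a REPLY step
print(normalize(before) == normalize(after))  # True
```

Note that this naive normalization would loop forever on a cycle of futures, which mirrors why the sufficient condition below is not necessary (Figure 8).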

More precisely, we have the following sufficient condition for equivalence modulo future replies:

P1 −REPLY→ P′ ∧ P2 −REPLY→ P′ ⇒ P1 ≡F P2

But this condition is not necessary, as it does not deal with mutual references between futures. Indeed, in case of a cycle of futures, one can obtain configurations that will never converge but behave identically (Figure 8).

THEOREM 2 (EQUIVALENCE AND PARALLEL REDUCTION).

P =T⇒ Q ∧ P ≡F P′ ⇒ ∃Q′, P′ =T⇒ Q′ ∧ Q′ ≡F Q

This important theorem states that if one can apply a reduction rule on a configuration then, after several REPLY steps, a reduction using the same rule can be applied on any equivalent configuration.

Idea of the proof. Theorem 2 is a direct consequence of the following property:

PROPERTY 1.

P −T→ Q ∧ P ≡F P′ ⇒ ∃Q′, P′ =T⇒ Q′ ∧ Q′ ≡F Q

Indeed, if a reduction can be made on a configuration, then the same one (up to equivalence) can be made on an equivalent configuration. The proof is decomposed in two parts. First, we may need to apply some REPLY rules to be able to perform the same rule on the two terms: if we cannot apply the same reduction as P −T→ Q (same rule on the same activities, ...) on P′, we apply −REPLY→ enough times to be able to apply the reduction on a P′′ (P′ −REPLY→* P′′, P′′ −T→ Q′). It is straightforward to check that if two configurations are equivalent, the same reduction can be applied on both, except if one of them is stuck. The second part of the proof consists in verifying that the application of the same reduction rule on equivalent terms leads to equivalent terms. This is done by a long case study (not detailed here). □

4.5 Confluence

Two configurations are said to be confluent if they can be reduced to equivalent configurations.

DEFINITION 8 (CONFLUENCE: P1 ↓ P2).

P1 ↓ P2 ⇔ ∃R1, R2, P1 −→* R1 ∧ P2 −→* R2 ∧ R1 ≡F R2

Figure 8. Updates in a cycle of futures (two activities α and β hold mutually referencing futures f1 and f2, updated by REPLY steps)

The simplest definition of equivalence consists in following paths from the root object of equivalent activities. If the same paths can be followed in both configurations then the two configurations are equivalent. Of course, paths are insensitive to the following of calculated future references.

The principles of the confluence property can be summarized as follows: the only potential source of non-confluence is the interference of two REQUEST rules on the same destination activity; the order of updates of futures has no influence on the reduction of a term. Note that even if this property is natural, it allows a lot of asynchrony and flexibility in future usage and updates, even in an imperative object calculus. The fact that, for non-FIFO service, the order of requests does not matter if they cannot be involved in the same Serve primitive allows us to extend the preceding principle. Thus, two compatible configurations obtained from the same term are confluent.

Let T ∈ {LOCAL, NEWACT, REQUEST, SERVE, ENDSERVICE, REPLY} be any parallel reduction, and let =T⇒ denote the reduction −T→ preceded by some applications of the REPLY rule.

DEFINITION 7 (REDUCTION WITH FUTURE UPDATES).

=T⇒ = −REPLY→* −T→ if T ≠ REPLY, and =T⇒ = −REPLY→* if T = REPLY

THEOREM 3 (RSL CONFLUENCE).

P −→* Q1 ∧ P −→* Q2 ∧ Q1 ⋈ Q2 ⇒ Q1 ↓ Q2

Idea of the proof. The key idea is that if two configurations are compatible, then there is a way to perform the missing request sendings in the right order. Thus the configurations can be reduced to a common one (modulo equivalence modulo future replies). Let Q be the set of configurations obtained from P and compatible with Q1 and Q2. The proof of the diamond property on −→ is a long case study on conflicts between rules. Finally, we obtain:

S −T1→ S1 ∧ S −T2→ S2 ∧ S, S1, S2 ∈ Q ⇒ S1 ≡F S2 ∨ ∃S1′, S2′, (S1 −T2→ S1′ ∧ S2 −T1→ S2′ ∧ S1′ ≡F S2′ ∧ S1′, S2′ ∈ Q)

Note that the problem of conflicts between REQUEST rules is solved by the introduction of RSLs and of compatibility between RSLs. Note also that the case where T1 or T2 is REPLY is not needed for the proof of Theorem 3, because in that case Theorem 2 is sufficient to conclude. Also note that the determinism of the previous reductions implies that the prefix order on activities is a sufficient condition for RSL compatibility: the fact that received requests come from the same activities implies that these requests are the same (have the same argument and invoke the same method).

The transition from the preceding property to the diamond property on =⇒ (Property 2) can be summarized by the diagram in Figure 9. More details on this proof can be found in [6].



PROPERTY 2 (DIAMOND).

P1 ≡F P2 ∧ P1 =T1⇒ Q1′ ∧ P2 =T2⇒ Q2′ ∧ Q1′, Q2′ ∈ Q
⇒ Q1′ ≡F Q2′ ∨ ∃R1, R2, (Q1′ =T2⇒ R1 ∧ Q2′ =T1⇒ R2 ∧ R1 ≡F R2 ∧ R1, R2 ∈ Q)

Figure 9. Diamond property proof

Theorem 3 is a classical consequence of Property 2. □

4.6 Deterministic Object Networks

The work of Kahn and MacQueen on process networks [18] suggested to us the following properties ensuring the determinism of some programs. In process networks, determinacy is ensured by the facts that channels have only one source, and destinations read data independently (values are broadcast to all destination processes). Most importantly, the reading of an entry in the buffer is blocking: the order of reading on different channels is fixed for a given program. In ASP, the semantics ensures that Serve primitives are blocking. Moreover, ensuring compatibility implies that two activities cannot concurrently send a request on a given method (or set of method labels M appearing in a Serve(M)) of the same activity. In order to formalize this principle, Deterministic Object Networks (DON) are defined below.

DEFINITION 9 (DON). A configuration P, derived from an initial configuration P0, is a Deterministic Object Network (DON(P)) if:

P −→* Q ⇒ ∀α ∈ Q, ∀M ∈ M_α^{P0}, ∃1 β ∈ Q, ∃m ∈ M, a_β = R[ι.m(…)] ∧ σ_β(ι) = AO(α)

where ∃1 means "there is at most one".

A program is a deterministic object network if at any time, for each set of labels M on which α can perform a Serve primitive, only one activity can send a request on the methods of M. Consequently, DON terms always reduce to compatible configurations:

PROPERTY 3 (DON AND COMPATIBILITY).

DON(P) ∧ P =T1⇒ Q1 ∧ P =T2⇒ Q2 ⇒ Q1 ⋈ Q2

The DON definition implies that two activities can never send requests that may interfere to the same third activity; hence DON(P) ensures RSL compatibility between the terms obtained from P. Indeed, suppose two requests could interfere in the same Serve(M) inside an activity γ. Then there would be a reduction from P leading to a term in which two different activities try to send these requests to γ, which would contradict the DON definition. Thus the set of DON terms is a deterministic sub-calculus of ASP:

THEOREM 4 (DON DETERMINISM).

DON(P) ∧ P −→* Q1 ∧ P −→* Q2 ⇒ Q1 ↓ Q2

We have shown here that we can easily identify a sub-calculus (DON terms) of ASP that is deterministic.

Note that if we replaced the primitive Serve(M) by a primitive Serve(α) allowing one to serve a request coming from a given activity, then the ASP calculus would be deterministic. Such a calculus would be closer to process networks, where get operations are performed on a given channel and a channel has only one source process. Furthermore, the order in which results are returned still would not affect confluence, and futures would provide powerful implicit channels for results.
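Definition 9 also suggests a simple runtime check. The sketch below (a hypothetical API of ours, and a coarse simplification of the DON condition) records, for each pair (target activity, potentially served label set M), the activities that send matching requests, and flags a violation as soon as two distinct senders hit the same potential service:

```python
from collections import defaultdict

class DonMonitor:
    """Tracks senders per (target, service set). The DON condition requires
    at most one sender for each set M the target may pass to Serve(M)."""
    def __init__(self, potential_services):
        self.services = potential_services   # target -> list of label sets M
        self.senders = defaultdict(set)      # (target, M) -> senders seen

    def send(self, sender, target, label):
        # Record the sender against every potential service matching the label.
        for m in self.services.get(target, []):
            if label in m:
                self.senders[(target, m)].add(sender)

    def is_don(self):
        return all(len(s) <= 1 for s in self.senders.values())

# A node served by a single parent satisfies the condition...
mon = DonMonitor({"node": [frozenset({"add", "search"})]})
mon.send("client", "node", "add")
mon.send("client", "node", "search")
print(mon.is_don())   # True: a single sender per potential service
# ...but a second sender on the same Serve(M) breaks it.
mon.send("other", "node", "add")
print(mon.is_don())   # False: two senders interfere on the same Serve
```

This is only a per-trace check; the definition quantifies over all reductions from P0, which is why the paper resorts to static approximations such as the tree topology of the next section.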

4.7 Application: tree topology

In this section we propose a simple static approximation of DON terms, which has the advantage of being valid even in the highly interleaving case of FIFO services.

Let us consider the request flow graph, that is, the graph whose nodes are activities and with an edge between two activities whenever one sends requests to the other (α →R β if α sends requests to β). If, at every step of the reduction, the request flow graph is a tree, then for each α, RSL_α contains occurrences of at most one activity. Then for all Q and R such that P −→* Q ∧ P −→* R, we have Q ⋈ R. As a consequence, we can conclude:

THEOREM 5 (TREE REQUEST FLOW GRAPH). If at any time the request flow graph forms a set of trees, then the reduction is deterministic.

This theorem proves the determinism of the binary tree of Figure 2. In general, such a property is useful to prove the deterministic behavior of any tree-like part of a program. Theorem 5 is easy to state but difficult to ensure. For example, a program that selectively serves different methods coming from different activities will still behave deterministically upon out-of-order receptions of those methods. This is a direct consequence of the DON property, which is not directly related to object topology, and such a program does not satisfy the hypothesis of Theorem 5. FIFO service is, to some extent, the worst case with respect to determinism, as any out-of-order reception of requests leads to non-determinism. At the opposite extreme, request service by source activity (i.e., if the service primitive were of the form Serve(α)) would be entirely confluent. Even if the DON definition allows much more flexibility in the general case, it seems difficult to find a more precise property for FIFO services.
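The hypothesis of Theorem 5 can be tested on a snapshot of the request flow graph. A minimal sketch (the edge representation is our assumption) checks that every activity has at most one requester and that the requester relation is acyclic, i.e. that the graph is a set of trees:

```python
def is_forest(edges):
    """edges: set of (sender, receiver) pairs of the request flow graph.
    Returns True iff the graph is a set of trees: every activity has at
    most one requester, and the requester relation contains no cycle."""
    parents = {}
    for sender, receiver in edges:
        if parents.get(receiver, sender) != sender:
            return False          # two distinct requesters: RSL mixes sources
        parents[receiver] = sender
    for start in parents:         # follow requester links to detect cycles
        seen, node = set(), start
        while node in parents:
            if node in seen:
                return False
            seen.add(node)
            node = parents[node]
    return True

# A binary-tree topology: each node is called only by its parent.
tree_edges = {("client", "n1"), ("n1", "n2"), ("n1", "n3"), ("n2", "n4")}
print(is_forest(tree_edges))                    # True: Theorem 5 applies
print(is_forest(tree_edges | {("n3", "n2")}))   # False: n2 has two requesters
```

Note that this checks a single snapshot; the theorem requires the tree shape at every step of the reduction.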

4.8 A deterministic example: the binary tree

The binary tree of Section 2 verifies Theorem 5 and thus behaves deterministically provided that, at any time, at most one client can add new nodes. Figure 10 illustrates the evaluation of the term:

let tree = (BT.new).add(3,4).add(2,3).add(5,6).add(7,8) in
[a = tree.search(5), b = tree.search(3)].b := tree.search(7)

This term behaves in a deterministic manner whatever order of replies occurs.

Figure 10. Concurrent replies in the binary tree case (legend: flow of requests; flow of (indirect) replies)

Now consider that the result of a preceding request is used to create a new node (dotted lines in Figure 10):

let tree = (BT.new).add(3,4).add(2,3).add(5,6).add(7,8) in
let Client = [a = tree.search(5), b = tree.search(3)] in
Client.b := tree.search(7); tree.add(1,Client.a)

Then the future update that fills the node indexed by 1 can occur at any time, since we do not need the value associated with this node. Consequently, the future update can occur directly from node number 5 to node number 1.
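The deterministic behavior of this example can be simulated with a caricature of ASP activities: one thread per active object (sequential service) and asynchronous calls returning futures. This Python sketch is ours and drastically simplifies ASP (in particular, future arguments are awaited before the call rather than by wait-by-necessity):

```python
from concurrent.futures import ThreadPoolExecutor

class Activity:
    """A toy active object: one thread, FIFO service, asynchronous calls
    returning futures. A caricature of ASP, not its formal semantics."""
    def __init__(self, obj):
        self._obj = obj
        self._exec = ThreadPoolExecutor(max_workers=1)  # sequential service

    def call(self, method, *args):
        # Crude stand-in for wait-by-necessity: resolve future arguments
        # before submitting the request.
        args = [a.result() if hasattr(a, "result") else a for a in args]
        return self._exec.submit(getattr(self._obj, method), *args)

class BT:
    """A flat stand-in for the binary tree: a key/value store."""
    def __init__(self):
        self.store = {}
    def add(self, key, value):
        self.store[key] = value
    def search(self, key):
        return self.store.get(key)

tree = Activity(BT())
for k, v in [(3, 4), (2, 3), (5, 6), (7, 8)]:
    tree.call("add", k, v)
a = tree.call("search", 5)   # futures: the client does not block here
b = tree.call("search", 3)
c = tree.call("search", 7)
print(a.result(), b.result(), c.result())  # 6 4 8
```

Because a single client submits to a single-threaded service, the requests are served in a fixed order and the three results are the same regardless of when the replies are consumed, matching the determinism granted by Theorem 5.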

5 Related works

The ASP calculus is based on the untyped imperative object calculus of Abadi and Cardelli (the impς-calculus of [1]). The ASP local semantics resembles that of [11], but we did not find any concurrent object calculus [12, 15, 24] with a similar way of communicating between asynchronous objects (no shared memory, asynchronous communication, futures, ...).

Obliq [4] is a language based on the ς-calculus that expresses both parallelism and mobility. It is based on threads communicating through a shared memory. As in ASP, calling a method on a remote object leads to a remote execution of the method, but this execution is performed by the original thread (or, more precisely, the original thread is blocked). Moreover, for a non-serialized object, many threads can manipulate the same object. In ASP, by contrast, the notion of executing thread is linked to the activity, so every object is "serialized", but a remote invocation does not stop the current thread. Finally, in ASP data-driven synchronization is sufficient and no notion of thread is necessary: we have one process per activity. Øjeblik [23] is a sufficiently expressive subset of Obliq which has a formal semantics. The generalized references for all mutable objects, the presence of threads, and the principle of serialization (with mutexes) make the Obliq and Øjeblik languages very different from ASP.

Halstead defined Multilisp [13], a language with shared memory and futures. But the combination of shared memory and side effects prevents Multilisp from being determinate.

ASP can be rewritten in the π-calculus [22], but this would not help us to prove the confluence property directly. Under certain restrictions [20, 30], π-calculus terms can be statically proved to be confluent, and such results could be applicable to some ASP terms. π-calculus terms communicate over channels, so a notion of channels can be introduced in ASP. Let a channel be a pair (destination activity, set of method labels), and suppose every Serve primitive concerns a single label (if several methods can be served by the same primitive, then they must belong to the same channel). A communication over a channel (α, foo) is equivalent to a remote method call on the method foo of the active object of α. If at any time only one activity can send a request on a given channel, then the term verifies the DON property and the program behaves deterministically. In the π-calculus, such programs would be considered as using only linearized channels and would lead to the same conclusion. Indeed, a linear channel is a channel on which only one input and one output can be performed. The unicity of the destination is ensured by the definition of channels and requests, and the unicity of the source process is ensured by the DON property. Note that in ASP, updates of responses along non-linearized channels can be performed, which makes the ASP confluence property more powerful. Moreover, this definition of channels is more flexible because a channel can contain several method labels: one can wait for a request on any subset of the labels belonging to a channel, in other words, we can perform a Serve on a part of a channel without losing determinacy.

Finally, the importance of tree topologies in concurrent computations has already been underlined in [26], but that model is based on a spreadsheet framework and does not ensure determinism.

Pierce and Turner used PICT (a language derived from the π-calculus) to implement object-based programming and channel-based synchronization in [25]. But all languages derived from the π-calculus require explicit channel-based synchronization rather than the implicit data-flow synchronization proposed in the current paper, which accounts for much of ASP's expressivity.

The join-calculus [9, 10] is a calculus with mobility and distribution. Synchronization in the join-calculus is based on filtering patterns over channels. The differences between channel synchronization and data-driven synchronization also make the join-calculus inadequate for expressing ASP principles.

Process networks [17] provide confluent parallel processes but require that the order of service be predefined and that two processes never send data on the same channel, which is more restrictive and less concurrent than ASP. The ASP channel view introduced above can also be compared to process network channels. As in the π-calculus case, ASP channels seem more flexible and our property more general, especially because future updates can occur at any time: return channels do not have to verify any constraint, and a Serve can be performed on a part of a "channel".

πoβλ [16] is a concurrent object-oriented language. A sufficient condition, based on a program transformation, is given for increasing concurrency without losing determinacy. Under this condition, one can return the result of a method before the end of its execution; the execution of the method then continues in parallel with the caller thread. This sufficient condition is expressed as an equivalence between the original and the transformed program. Sangiorgi [27], and Liu and Walker [21], proved the correctness of the transformations on πoβλ described in [16]. In πoβλ, a caller always waits for the method result: a synchronous method call with an anticipated result. In ASP, method calls are systematically asynchronous, so more instructions can be executed in parallel: the futures mechanism allows one to continue the execution in the calling activity without having the result of the remote call. A simple extension to ASP could provide a way to assign a value to a future before the end of the execution of a method. Note that in πoβλ this characteristic is the source of parallelism, whereas in ASP it would simply allow an earlier future update.

Relying on the active object concept, the ASP model is rather close to, and was somewhat inspired by, the notion of actors [2, 3]. Both rely on asynchronous communications, but actors are rather functional, while ASP is in an imperative and object-oriented setting. While actors interact by asynchronous message passing, ASP is based on asynchronous method calls, which remain strongly typed and structured, with future-based synchronizations and without explicit continuations. To some extent, the ASP future semantics, with the store partitioning property (isolation between future values, the active store, and the pending requests), accounts for the capacity to achieve confluence and determinism in an imperative setting. More generally, parallel purely functional evaluators are deterministic and have been widely studied [19, 14].

6 Conclusion

In this paper, we have introduced a parallel object calculus with asynchronous communications. It is based on a sequential calculus à la Abadi-Cardelli, extended with a primitive that creates a new process (activity). Communications in ASP are based on asynchronous request sends and replies. Simple conditions allow one to specify parallel and distributed applications that behave deterministically. A compatibility condition on the Request Sender List (RSL: the ordered list of activities that have sent requests) characterizes confluence between terms. Then, a set of Deterministic Object Network (DON) programs that behave deterministically has been identified. While requests can be non-deterministically interleaved in the pending queue, DON programs will always serve them in a deterministic manner. Finally, a simple application to tree-like configurations exhibited a deterministic sub-calculus of ASP.

An interesting property of our calculus is that the order in which the replies to requests occur has no influence on determinism. This provides parallelism, as a process can continue its activity while still expecting the results of several requests. Moreover, on a practical side, this property allows future updates to be performed by separate threads. Even if parts of this work can be seen as an application of π-calculus linearized channels or of process networks, the ASP calculus provides a powerful generalization of these techniques: futures create implicit channels providing both convenient return of results and data-driven synchronizations. Moreover, the order of replies does not act upon program behavior.

No typing of the ASP calculus has been proposed in this paper. Of course, the Abadi and Cardelli typing of objects could be adapted to ASP. But a more promising perspective lies in a type system for provided and required services, which could allow us to approximate the potential services and, most importantly, to statically identify many deterministic programs. More generally, a larger checking of the DON condition could be performed by applying static analysis techniques.

Acknowledgments We thank Andrew Wendelborn and Davide Sangiorgi for comments on earlier versions of this paper.

7 References

[1] M. Abadi and L. Cardelli. A Theory of Objects. Springer-Verlag New York, Inc., 1996.
[2] G. Agha, I. A. Mason, S. Smith, and C. Talcott. Towards a theory of actor computation (extended abstract). In W. R. Cleaveland, editor, CONCUR'92: Proc. of the Third International Conference on Concurrency Theory, pages 565-579. Springer, Berlin, Heidelberg, 1992.
[3] G. Agha, I. A. Mason, S. F. Smith, and C. L. Talcott. A foundation for actor computation. Journal of Functional Programming, 7(1):1-72, 1997.
[4] L. Cardelli. A language with distributed scope. In Conference Record of the 22nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'95), pages 286-297, San Francisco, January 22-25, 1995. ACM Press.
[5] D. Caromel. Toward a method of object-oriented concurrent programming. Communications of the ACM, 36(9):90-102, Sept. 1993.
[6] D. Caromel, L. Henrio, and B. P. Serpette. Asynchronous sequential processes. Technical report RR-4753, INRIA Sophia Antipolis, 2003.
[7] D. Caromel, W. Klauser, and J. Vayssière. Towards seamless computing and metacomputing in Java. Concurrency: Practice and Experience, 10(11-13):1043-1061, 1998. ProActive available at http://www.inria.fr/oasis/proactive.
[8] C. Flanagan and S. Qadeer. A type and effect system for atomicity. In Proceedings of the ACM SIGPLAN 2003 Conference on Programming Language Design and Implementation, pages 338-349. ACM Press, 2003.
[9] C. Fournet and G. Gonthier. The reflexive CHAM and the join-calculus. In Conference Record of the 23rd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'96), pages 372-385, St. Petersburg, Florida, January 21-24, 1996. ACM Press.
[10] C. Fournet, G. Gonthier, J.-J. Lévy, L. Maranget, and D. Rémy. A calculus of mobile agents. In U. Montanari and V. Sassone, editors, Proc. 7th Int. Conf. on Concurrency Theory (CONCUR), volume 1119 of Lecture Notes in Computer Science, pages 406-421, Pisa, Italy, Aug. 1996. Springer-Verlag, Berlin.
[11] A. D. Gordon, P. D. Hankin, and S. B. Lassen. Compilation and equivalence of imperative objects. FSTTCS: Foundations of Software Technology and Theoretical Computer Science, 17, 1997.
[12] A. D. Gordon and P. D. Hankin. A concurrent object calculus: reduction and typing. In Proceedings HLCL'98. Elsevier ENTCS, 1998.
[13] R. H. Halstead, Jr. Multilisp: a language for concurrent symbolic computation. ACM Transactions on Programming Languages and Systems (TOPLAS), 7(4):501-538, 1985.
[14] K. Hammond. Parallel functional programming: an introduction (invited paper). In H. Hong, editor, First International Symposium on Parallel Symbolic Computation (PASCO'94), Linz, Austria, pages 181-193. World Scientific Publishing, 1994.
[15] A. Jeffrey. A distributed object calculus. In ACM SIGPLAN Workshop on Foundations of Object-Oriented Languages, 2000.
[16] C. B. Jones and S. Hodges. Non-interference properties of a concurrent object-based language: proofs based on an operational semantics. In B. Freitag, C. B. Jones, C. Lengauer, and H.-J. Schek, editors, Object-Orientation with Parallelism and Persistence, chapter 1, pages 1-22. Kluwer Academic Publishers, 1996. ISBN 0-7923-9770-3.
[17] G. Kahn. The semantics of a simple language for parallel programming. In J. L. Rosenfeld, editor, Information Processing '74: Proceedings of the IFIP Congress, pages 471-475. North-Holland, New York, NY, 1974.
[18] G. Kahn and D. MacQueen. Coroutines and networks of parallel processes. In B. Gilchrist, editor, Information Processing 77: Proc. IFIP Congress, pages 993-998. North-Holland, 1977.
[19] O. Kaser, S. Pawagi, C. R. Ramakrishnan, I. V. Ramakrishnan, and R. C. Sekar. Fast parallel implementation of lazy languages - the EQUALS experience. In LISP and Functional Programming, pages 335-344, 1992.
[20] N. Kobayashi, B. C. Pierce, and D. N. Turner. Linearity and the pi-calculus. In Proceedings of POPL'96, pages 358-371. ACM, Jan. 1996.
[21] X. Liu and D. Walker. Confluence of processes and systems of objects. In P. D. Mosses, M. Nielsen, and M. I. Schwartzbach, editors, TAPSOFT'95: Theory and Practice of Software Development, 6th International Joint Conference CAAP/FASE, volume 915 of LNCS, pages 217-231. Springer, 1995.
[22] R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes, part I/II. Journal of Information and Computation, 100:1-77, Sept. 1992.
[23] U. Nestmann, H. Hüttel, J. Kleist, and M. Merro. Aliasing models for mobile objects. Information and Computation, 175(1):3-33, 2002.
[24] O. Nierstrasz. Towards an object calculus. In M. Tokoro, O. Nierstrasz, and P. Wegner, editors, Proceedings of the ECOOP'91 Workshop on Object-Based Concurrent Computing, volume 612 of LNCS, pages 1-20. Springer-Verlag, 1992.
[25] B. C. Pierce and D. N. Turner. Concurrent objects in a process calculus. In T. Ito and A. Yonezawa, editors, Proceedings Theory and Practice of Parallel Programming (TPPP'94), pages 187-215, Sendai, Japan, 1995. Springer LNCS 907.
[26] Y.-r. Choi, A. Garg, S. Rai, J. Misra, and H. Vin. Orchestrating computations on the world-wide web. In B. Monien and R. Feldmann, editors, Euro-Par, volume 2400 of Lecture Notes in Computer Science. Springer, 2002.
[27] D. Sangiorgi. The typed π-calculus at work: a proof of Jones's parallelisation theorem on concurrent objects. Theory and Practice of Object-Oriented Systems, 5(1), 1999. An early version was included in the informal proceedings of FOOL 4, January 1997.
[28] B. C. Smith. Reflection and semantics in Lisp. In Conference Record of the Eleventh Annual ACM Symposium on Principles of Programming Languages, pages 23-35, Salt Lake City, Utah, January 15-18, 1984. ACM SIGACT-SIGPLAN, ACM Press.
[29] G. L. Steele, Jr. Making asynchronous parallelism safe for the world. In POPL'90: Proceedings of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, January 17-19, 1990, San Francisco, CA, pages 218-231, New York, NY, USA, 1990. ACM Press.
[30] M. Steffen and U. Nestmann. Typing confluence. Interner Bericht IMMD7-xx/95, Informatik VII, Universität Erlangen-Nürnberg, 1995.