An Extension to Pi-Calculus for Performance Evaluation

Journal of Software Engineering and Applications, 2011, 1, 9-17 doi:10.4236/jsea.2011.41002 Published Online January 2011 (http://www.scirp.org/journal/jsea)

An Extension to Pi-Calculus for Performance Evaluation

Shahram Rahimi, Elham S. Khorasani, Yung-Chuan Lee, Bidyut Gupta
Department of Computer Science, Southern Illinois University, Illinois, USA.
Email: {rahimi, elhams, yclee, bidyut}@siu.edu

Received October 12th, 2010; revised November 25th, 2010; accepted November 30th, 2010.

ABSTRACT

Pi-calculus is a formal method for describing and analyzing the behavior of large distributed and concurrent systems. It offers a conceptual framework for describing and analyzing concurrent systems whose configuration may change during computation. For all the advantages that pi-calculus offers, it does not provide any method for evaluating the performance of the systems it describes; yet performance is a crucial factor that needs to be considered in designing a multi-process system. The tools currently available for pi-calculus are high-level language tools that provide facilities for describing and analyzing systems, but no practical tool is on hand for pi-calculus-based performance evaluation. In this paper, performance evaluation is incorporated into pi-calculus by adding performance primitives and associating performance parameters with each action that takes place internally in a system. Using such parameters, designers can benchmark multi-process systems and compare the performance of different architectures against one another.

Keywords: Pi-Calculus, Performance Evaluation, Multi-Agent Systems, System Modeling

1. Introduction

Pi-calculus, introduced by Robin Milner [1-5], stands out as one of the most powerful tools for modeling and analyzing the behavior of concurrent computational systems such as multi-agent systems, grid computing systems, and so forth. Pi-calculus defines the syntax of processes and actions and provides transition rules for analyzing and validating system functionality. Mobility in pi-calculus is achieved by sending and receiving channel names through which the recipient can communicate with the sender or even with other processes, provided the channel is a free name. As the interactions between processes change, the configuration of the whole system changes as well, and mobility is attained. In the higher-order pi-calculus [5], computational entities (such as objects, processes, or code segments) can be transferred as well as values and channel names. Because of its strong support for mobility, higher-order pi-calculus is very flexible and powerful for modeling mobile agent systems. As mentioned above, however, pi-calculus by itself does not provide any method for evaluating the performance of the systems it describes; yet performance is an important factor that needs to be considered in designing a multi-process system. The traditional

software development methods delay the performance test until after the implementation is completed. This approach has an obvious drawback: if the system is found not to satisfy the desired performance requirements, most of the implementation has to be redone. Evaluating performance at the design stage is therefore of significant importance and can save much of the developers' time, energy, and cost. Previous work on integrating process algebra with performance measures has either extended the syntax and semantics of the pi-calculus to account for performance, or derived quantitative performance measures from the system transitions. In the former approach, a family of stochastic process algebras [6-14] has been introduced that extends standard process algebra with timing or probability information. In these algebras, every action has an associated duration, assumed to be a random variable with an exponential distribution. A prefix then has the form (a,r).P, where a represents an action whose duration is exponentially distributed with parameter r, known as the action rate. The memoryless property of the exponential distribution conforms to the Markov property, so Markov chains [15-16] can be applied to compute the probability of reaching each state. Consequently, the final


performance of the system can be calculated given the reward model. One potential problem with providing explicit action rates is that when the action rates are not known in advance, a different symbol must be used for each action rate, which leads to a heavy computational overhead in the performance calculation. In the latter approach [17-18], the labels of the transition system are enhanced to carry additional information about the inference rule that was applied in the transition. Instead of giving an explicit action rate for each action, probabilistic distributions are computed from the transition labels by associating a cost function with each transition; a Markov chain is then applied as before to calculate the final performance. In this paper we follow the latter approach, in that we leave the syntax of the pi-calculus intact and associate performance parameters with each inference rule. However, our approach differs from the previous works in two aspects. First, the cost functions proposed in previous works are too abstract to be directly applied to real cases; here, the cost function is replaced by a concrete transition rate function that estimates the duration of each transition rule. Second, our transition rates account for process mobility, such as transferring objects or code from one environment to another, considering the fact that process mobility (as in mobile agents) differs from simple name exchange in its complexity and additional overhead [19]. The rest of the paper is organized as follows. We start with some background on pi-calculus syntax and its transition rules. Then we review continuous-time Markov chains, their steady-state analysis, and reward models. Finally, we introduce the transition rate function and establish our performance evaluation methodology based on it. A conclusion section summarizes the paper and gives a perspective on future work in this area.

2. Pi-Calculus

Pi-calculus, an extension of the process algebra CCS, was introduced by Robin Milner in the late 1980s [1]. It provides a formal theory for modeling mobile systems and reasoning about their behaviors. In pi-calculus, two kinds of entities are specified: "names" and "processes" (or "agents"). A "name" is a channel or a value that is transferred via a channel. We follow the naming conventions and the syntax of [1], in which u, v, w, x, y, z range over names and A, B, C, … range over process (agent) identifiers. The syntax of an agent (or process) in pi-calculus is defined as follows:

  P ::= 0 | Σ_{i∈I} π_i.P_i | ȳx.P | ȳ(x).P | y(x).P | τ.P
      | (P1 | P2) | P1 + P2 | (νx)P | [x = y]P | A(y1, …, yn) | !P

• 0 is the null process: agent P does not do anything.
• Σ_{i∈I} π_i.P_i : Agent P behaves as exactly one of the π_i.P_i, where i ∈ I, and then behaves like P_i. If I = ∅, then P behaves like 0. Here π_i denotes any action that can take place in P (such as τ, y(x), and so on).
• ȳx.P : Agent P sends the free name x out along channel y and then behaves like P. A name x is free if it is not bound in P; if x is bound in P (that is, private to P), it can only be used inside P.
• ȳ(x).P : Agent P sends the bound name x out along channel y and then behaves like P.
• y(x).P : Agent P receives a name along channel y, binds it to x, and then behaves like P.
• τ.P : Agent P performs the silent action τ and then behaves like P.
• P1 | P2 : P1 and P2 execute in parallel. They may behave independently or interact with each other. For example, if P1 = τ1.P1' and P2 = τ2.P2', then P1 and P2 behave independently; but if P1 = ȳx.P1' and P2 = y(x).P2', then P1 sends name x to P2 via channel y.
• P1 + P2 : The sum means that either P1 or P2 is executed, but not both.
• (νx)P : Restriction. Agent P does not change except that x becomes private to P: any outside communication through channel x is prohibited.
• [x = y]P : The match operator. If x = y, the agent behaves like P; otherwise it behaves like 0.
• A(y1, …, yn) : An agent identifier, where y1, …, yn are free names occurring in P.
• !P : Replication, which can be thought of as an infinite composition P | P | P | ….

The semantics of the calculus is defined by a set of transition rules which establish the system evolution. A transition rule is written P --α--> P', meaning that process P evolves to P' after performing action α. Action α can be one of the following prefixes:

  α ::= τ | x̄y | x̄(y) | x(y)

The pi-calculus transition rules are listed in Table 1. The TAU-ACT, INPUT-ACT, OUTPUT-ACT, SUM and MATCH rules correspond directly to the definitions of the silent, input, and output actions and the sum and match operators.

Table 1. Pi-calculus transition rules [1].

  TAU-ACT:    τ.P --τ--> P
  OUTPUT-ACT: x̄y.P --x̄y--> P
  INPUT-ACT:  x(z).P --x(w)--> P{w/z}, provided w ∉ fn((z)P)
  SUM:        if P --α--> P' then P + Q --α--> P'
  MATCH:      if P --α--> P' then [x=x]P --α--> P'
  IDE:        if P{ỹ/x̃} --α--> P' then A(ỹ) --α--> P', where A(x̃) =def P
  PAR:        if P --α--> P' then P|Q --α--> P'|Q, provided bn(α) ∩ fn(Q) = ∅
  COM:        if P --x̄y--> P' and Q --x(z)--> Q' then P|Q --τ--> P'|Q'{y/z}
  RES:        if P --α--> P' then (νy)P --α--> (νy)P', provided y ∉ n(α)
  CLOSE:      if P --x̄(w)--> P' and Q --x(w)--> Q' then P|Q --τ--> (νw)(P'|Q')
  OPEN:       if P --x̄y--> P' then (νy)P --x̄(w)--> P'{w/y}, provided y ≠ x and w ∉ fn((y)P')

The side condition w ∉ fn((z)P) in the INPUT-ACT rule ensures that a free occurrence of w does not become bound by the substitution {w/z}. The PAR rule states that parallel composition does not inhibit computation; the side condition bn(α) ∩ fn(Q) = ∅ guarantees that a free name in Q does not turn into a bound name through the application of PAR. The COM rule shows the synchronous communication between two processes. The RES rule states that process P can resume its computation under restriction; the side condition y ∉ n(α) assures that no conflict occurs between the restricted name y and the names in α. In the CLOSE rule, P sends a bound name w to Q, hence the scope of w is extended to make it available to both P and Q. The OPEN rule transforms a free output action x̄y into a bound output x̄(w) and eliminates the restriction (νy); its side condition certifies that the bound name w does not occur free in P'.

Table 2. Higher-order pi-calculus labeled transition system [5].

  TAU:          τ.P --τ--> P
  SUM:          if P --α--> P' then P + Q --α--> P'
  MATCH:        if P --α--> P' then [x=x]P --α--> P'
  IDE:          if P{K̃/Ũ} --α--> P' then A⟨K̃⟩ --α--> P', where A(Ũ) =def P
  PAR:          if P --α--> P' then P|Q --α--> P'|Q, provided bn(α) ∩ fn(Q) = ∅
  RES:          if P --α--> P' then (νy)P --α--> (νy)P', provided y ∉ n(α)
  COM-NAME:     if P --x̄y--> P' and Q --x(z)--> Q' then P|Q --τ--> P'|Q'{y/z}, where y is not a process
  COM-PROCESS:  if P --(νỹ)x̄⟨K̃⟩--> P' and Q --x(Ũ)--> Q' then P|Q --τ--> (νỹ)(P'|Q'{K̃/Ũ}), where K̃ is a process vector
  CLOSE-NAME:   if P --x̄(w)--> P' and Q --x(w)--> Q' then P|Q --τ--> (νw)(P'|Q'), provided w ∉ fn(Q) and w is not a process
  OPEN:         if P --x̄⟨K̃⟩--> P' then (νz)P --(νz)x̄⟨K̃⟩--> P', provided x ≠ z and z ∈ fn(K̃)

The higher-order pi-calculus differs from the first-order pi-calculus in its ability to model the sending and receiving of processes as well as of values and channel names. In higher-order pi-calculus, x̄⟨K⟩ means sending a name or a process K via channel x, and x(U) means receiving a name or a process via channel x. Because of its support for process mobility, we chose higher-order pi-calculus as our target modeling language. The transition rules of the higher-order pi-calculus are listed

in Table 2. These rules are essentially the same as the transition rules of the first-order pi-calculus, with an additional rule for process communication (COM-PROCESS).
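As a concrete illustration of how a reduction rule operates, the first-order COM step x̄y.P | x(z).Q --τ--> P | Q{y/z} can be mimicked with a small toy encoding of prefixes as nested tuples. This is a hypothetical sketch, not part of any pi-calculus tool: the names `com_step` and `substitute` are ours, and the substitution is deliberately naive (it does not avoid name capture under binders).

```python
def substitute(term, old, new):
    """Replace occurrences of name `old` with `new` in a term encoded
    as nested tuples, e.g. ('out', channel, name, continuation).
    Naive: performs no capture-avoidance (illustration only)."""
    if term == old:
        return new
    if isinstance(term, tuple):
        return tuple(substitute(t, old, new) for t in term)
    return term

def com_step(send, recv):
    """Apply the COM rule to an output prefix and an input prefix on the
    same channel; returns the pair of continuation processes."""
    out_tag, x1, y, P = send   # ('out', x, y, P): send name y on channel x
    in_tag, x2, z, Q = recv    # ('in',  x, z, Q): receive into z on channel x
    assert out_tag == 'out' and in_tag == 'in' and x1 == x2
    return P, substitute(Q, z, y)

# x̄y.0 | x(z).z̄w.0  --τ-->  0 | ȳw.0
P, Q = com_step(('out', 'x', 'y', '0'),
                ('in', 'x', 'z', ('out', 'z', 'w', '0')))
# P == '0' and Q == ('out', 'y', 'w', '0'): the received name y replaced z,
# so the continuation now outputs on channel y (scope mobility in miniature).
```

The example shows why name passing yields mobility: after the communication, the receiver's continuation uses the transmitted name as a channel of its own.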

3. Continuous-Time Markov Chains

Markov chains are among the most commonly used mathematical tools for performance and reliability evaluation of dynamic systems. A Markov chain models a dynamic system as a set of states and a set of labeled transitions between the states. Each label gives the probability of a transition (for a discrete-time Markov chain) or the rate of a transition (for a continuous-time Markov chain). In this section, we review some basic definitions and theorems from [16] on stochastic processes and continuous-time Markov chains.

Definition 3.1 (Stochastic Process): A stochastic process is a family of random variables X = {X_t : t ∈ T}, where each random variable X_t is indexed by a parameter t ∈ T, usually called the time parameter when T ⊆ R+ = [0, ∞). The set of all possible values of X_t (for each t ∈ T) is known as the state space of the stochastic process; X_t is the state of the process at time t. If the index set T is countable, X is a discrete-time stochastic process; otherwise it is a continuous-time process.

Definition 3.2 (Continuous-Time Markov Chain (CTMC)): A stochastic process {X_t : t ∈ T} constitutes a CTMC if it satisfies the Markov property, namely that, given the present state, the future and past states are independent. Formally, for arbitrary t_i ∈ R≥0 with 0 ≤ t_0 < t_1 < … < t_n < t_{n+1}, n ∈ N, and s_i ∈ S, the following relation holds for the conditional probability mass function (pmf):

  P(X_{t_{n+1}} = s_{n+1} | X_{t_n} = s_n, X_{t_{n-1}} = s_{n-1}, …, X_{t_0} = s_0) = P(X_{t_{n+1}} = s_{n+1} | X_{t_n} = s_n)

where P(X_{t_{n+1}} = s_{n+1} | X_{t_n} = s_n) is the probability of moving from state s_n to state s_{n+1} during the period of time [t_n, t_{n+1}).

Definition 3.3 (Time-Homogeneous CTMC): A CTMC is called time-homogeneous if the transition probabilities P(X_{t_{n+1}} = s_{n+1} | X_{t_n} = s_n) depend only on the time difference t_{n+1} - t_n, and not on the actual values of t_{n+1} and t_n.

Definition 3.4 (State Sojourn Time): The sojourn time of a state is the time spent in that state before moving to another state. The sojourn times of a time-homogeneous CTMC exhibit the memoryless property. Since the exponential distribution is the only continuous distribution with this property, the random variables denoting the sojourn times, or holding times, must be exponentially distributed.

Definition 3.5 (Irreducible CTMC): A CTMC is called irreducible if every state S_j is reachable from every other state S_i, that is:

  ∀ S_i, S_j, S_i ≠ S_j, ∃ t > 0 : p_{i,j}(t) > 0

Definition 3.6 (Instantaneous Transition Rates of a CTMC): A transition rate q_{i,j}(t) measures how quickly the process travels from state i to state j at time t. The transition rates are related to the conditional transition probabilities as follows:

  q_{i,j}(t) = lim_{Δt→0} p_{i,j}(t, t+Δt) / Δt,  for i ≠ j
  q_{i,i}(t) = lim_{Δt→0} (p_{i,i}(t, t+Δt) - 1) / Δt = -Σ_{j≠i} q_{i,j}   (3.1)

The instantaneous transition rates form the infinitesimal generator matrix Q = [q_{i,j}]. A CTMC can be mapped onto a directed graph in which the nodes represent states and the arcs represent transitions between the states, labeled with their transition rates.

Definition 3.7 (The State Probability Vector): The state probability vector is a probability vector π(t) = [π_{S1}(t), π_{S2}(t), …], where π_{Si}(t) is the probability that the process is in state S_i at time t. Given the initial state probability vector π(0) = [π_{S1}(0), π_{S2}(0), …], the state probability vector at time t is calculated as:

  π(t) = π(0) P(0, t),  t ≥ 0   (3.2)

Because of the continuity of time, Equation (3.2) cannot be solved easily; instead, the following differential equation is used [16] to calculate the state probability vector from the transition rates:

  dπ_j(t)/dt = Σ_{i∈S} q_{i,j}(t) π_i(t)   (3.3)

3.1. The Steady State Analysis

For evaluating the performance of a dynamic system, it is usually desirable to analyze the behavior of the system over a long period of execution. The long-run dynamics of a Markov chain can be studied by finding a steady-state probability vector.

Definition 3.8 (The Steady-State Probability Vector): The state probability vector π is steady if it does not change over time. This means that further transitions do not affect the steady-state probabilities, and:

  lim_{t→∞} dπ(t)/dt = 0

Hence, from Equation (3.3):

  Σ_{i∈S} q_{i,j}(t) π_i(t) = 0,  ∀ j ∈ S

or, in vector-matrix form:

  πQ = 0   (3.4)

Thus the steady-state probability vector, if it exists, can be uniquely determined by the solution of Equation (3.4) together with the normalization condition Σ_{i∈S} π_i = 1.

The following theorem states sufficient conditions for the existence of a unique steady-state probability vector.

Theorem 3.1: A time-homogeneous CTMC has a unique steady-state probability vector if it is irreducible and finite.

A CTMC for which a unique steady-state probability vector exists is called an ergodic CTMC. In an ergodic CTMC, each state is reachable from any other state through one or more steps. The steady-state probability vector π for an ergodic CTMC can be found directly by solving Equation (3.4), as illustrated by the following example.

Figure 1 shows an ergodic CTMC containing five states, S1, S2, S3, S4, S5, in which each state is reachable from every other state. The value associated with each arc is the instantaneous transition rate between the states.

Figure 1. A simple ergodic CTMC.

The infinitesimal generator matrix Q for this case is:

      | -4   4   0   0   0 |
      |  3  -7   2   2   0 |
  Q = |  0   1  -2   1   0 |
      |  0   3   3  -8   2 |
      |  0   0   0   7  -7 |

Assuming that π = (x1, x2, x3, x4, x5), from Equation (3.4) we have:

  -4x1 + 3x2 = 0
  4x1 - 7x2 + x3 + 3x4 = 0
  2x2 - 2x3 + 3x4 = 0
  2x2 + x3 - 8x4 + 7x5 = 0
  2x4 - 7x5 = 0

In addition, the summation of all xi (1 ≤ i ≤ 5) equals 1, i.e.:

  x1 + x2 + x3 + x4 + x5 = 1

Solving this linear system for x1, …, x5, the final value of π is:

  π = [7/43, 28/129, 56/129, 56/387, 16/387]

Notice that each value in the vector π is greater than zero, which is consistent with the irreducibility of the CTMC. If, however, some values of π are equal to 0, two possibilities exist: some states are not reachable, or there are absorbing states. An absorbing state is a state in which no events can occur: once the system reaches an absorbing state, it cannot move to any other state. In other words, an absorbing state has only incoming transitions and no outgoing ones. Generally, absorbing states are failed states. A CTMC containing one or more absorbing states is called an absorbing CTMC. If a system has more than one absorbing state, all of the absorbing states can usually be merged into a single state.

The steady-state probability vector π for an absorbing CTMC is:

  π_i = 0 if S_i is not an absorbing state, and π_i = 1 otherwise   (3.5)

For an absorbing CTMC, it is usually interesting to find the time to absorption. In other words, we usually want to know how long it takes for the system to step into a final absorbing state (i.e., the system fails to work) from its starting state. From [16], the mean time to absorption (MTTA) is defined as:

  MTTA = Σ_{i∈N} L_i(∞)   (3.6)

where L_i(t) denotes the expected total time spent in state S_i during the interval [0, t), and L_N(∞) is obtained by solving:

  L_N(∞) Q_N = -π_N(0)   (3.7)

Here Q_N is the matrix obtained by restricting the infinitesimal generator matrix Q to the non-absorbing states.

As an example, consider the CTMC in Figure 2 with an absorbing state S5. Here the state space is partitioned into the set of absorbing states A = {S5} and non-absorbing states N = {S1, S2, S3, S4}.

Figure 2. An absorbing CTMC.

Thus the infinitesimal generator matrix for the non-absorbing states is reduced to:

        | -4   4   0   0 |
  Q_N = |  3  -7   2   2 |
        |  0   1  -2   1 |
        |  0   3   3  -8 |

Suppose π_N(0) = [1, 0, 0, 0], which means the system starts in state S1. Assuming L_N(∞) = [l1, l2, l3, l4], Equation (3.7) yields:

  -4l1 + 3l2 = -1
  4l1 - 7l2 + l3 + 3l4 = 0
  2l2 - 2l3 + 3l4 = 0
  2l2 + l3 - 8l4 = 0

Solving this linear system gives L_N(∞) = [17/16, 13/12, 11/6, 1/2] ≈ [1.063, 1.083, 1.833, 0.5], and:

  MTTA = Σ_{i∈N} L_i(∞) = 17/16 + 13/12 + 11/6 + 1/2 ≈ 4.479

3.2. Markov Reward Models

Markov reward models provide a framework for integrating performance measures with Markov chains. Computing the steady-state probability vector alone is not sufficient for the performance evaluation of a Markov model: a weight needs to be assigned to each state or system transition to calculate the final performance. These weights, also known as rewards, are defined according to specific system requirements and could represent time cost, system availability, reliability, fault tolerance, and so forth. Let the reward rate r_i be assigned to state S_i; then a reward r_i·t is accrued during a sojourn of time t in state S_i. Before proceeding, some definitions are presented from [16].

Definition 3.9 (Instantaneous Reward Rate): Let {X(t), t ≥ 0} denote a homogeneous finite-state CTMC with state space S. The instantaneous reward rate of the CTMC at time t is defined as:

  Z(t) = r_{X(t)}   (3.8)

Based on this definition, the accumulated reward in the finite time period [0, t] is:

  Y(t) = ∫_0^t Z(x) dx = ∫_0^t r_{X(x)} dx   (3.9)

Furthermore, the expected instantaneous reward rate and the expected accumulated reward are as follows:

  E[Z(t)] = E[r_{X(t)}] = Σ_i r_i π_i(t)   (3.10)

  E[Y(t)] = E[∫_0^t r_{X(x)} dx] = Σ_i r_i L_i(t)   (3.11)

where π(t) is the state probability vector at time t, and L_i(t) denotes the expected total time spent in state S_i during the interval [0, t). When t → ∞ we have:

  E[Z(∞)] = E[r_{X(∞)}] = Σ_i r_i π_i(∞)   (3.12)

  E[Y(∞)] = E[∫_0^∞ r_{X(x)} dx] = Σ_i r_i L_i(∞)   (3.13)

In particular, when the unique steady-state probability vector π of the CTMC exists, the expected instantaneous reward rate is the sum of each state reward multiplied by its steady-state probability, i.e.:

  E[Z(∞)] = E[r_{X(∞)}] = Σ_i r_i π_i   (3.14)

For an absorbing CTMC, the whole system will eventually enter an absorbing state, so it is logical to compute the expected accumulated reward until absorption as follows:

  E[Y(∞)] = E[∫_0^∞ r_{X(x)} dx] = Σ_{i∈N} r_i L_i(∞)   (3.15)

where N is the set of non-absorbing states.
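The two worked examples above can be reproduced numerically: the sketch below solves Equation (3.4) with the normalization condition for the ergodic chain of Figure 1, and Equation (3.7) for the absorbing chain of Figure 2. The reward vector at the end is a made-up illustration of Equation (3.14), not a value from the paper.

```python
import numpy as np

# Infinitesimal generator of the 5-state ergodic CTMC from Figure 1.
Q = np.array([
    [-4.0,  4.0,  0.0,  0.0,  0.0],
    [ 3.0, -7.0,  2.0,  2.0,  0.0],
    [ 0.0,  1.0, -2.0,  1.0,  0.0],
    [ 0.0,  3.0,  3.0, -8.0,  2.0],
    [ 0.0,  0.0,  0.0,  7.0, -7.0],
])

# Steady state (Equation 3.4): solve pi @ Q = 0 subject to sum(pi) = 1
# by replacing one balance equation with the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0                       # last row enforces sum(pi) = 1
b = np.zeros(5); b[-1] = 1.0
pi = np.linalg.solve(A, b)           # pi == [7/43, 28/129, 56/129, 56/387, 16/387]

# MTTA for the absorbing CTMC of Figure 2 (Equation 3.7):
# restrict Q to the non-absorbing states N = {S1, S2, S3, S4}.
QN = np.array([
    [-4.0,  4.0,  0.0,  0.0],
    [ 3.0, -7.0,  2.0,  2.0],
    [ 0.0,  1.0, -2.0,  1.0],
    [ 0.0,  3.0,  3.0, -8.0],
])
pi0 = np.array([1.0, 0.0, 0.0, 0.0])     # system starts in S1
L = np.linalg.solve(QN.T, -pi0)          # solves L @ QN = -pi0
mtta = L.sum()                           # == 17/16 + 13/12 + 11/6 + 1/2

# Expected steady-state reward (Equation 3.14) with hypothetical rates.
r = np.array([1.0, 0.5, 0.5, 0.2, 0.0])  # assumed reward per state
reward = r @ pi
```

Replacing one redundant balance equation with the normalization row is the standard trick for making the singular system πQ = 0 uniquely solvable.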

4. Performance Evaluation of Systems Modeled in Pi-Calculus

As mentioned before, the most common way to evaluate the performance of a system modeled in pi-calculus is to transform the model into a CTMC and run the steady-state analysis, followed by a reward model, to calculate the accumulated reward and hence the performance of the system. The major task in this regard is to assign appropriate transition rates to the Markov model. Previous literature suggests two different approaches. In the first approach, the transition rates are given explicitly for each action; this approach modifies the syntax of the pi-calculus and extends it to a stochastic pi-calculus [6-14]. The second approach [17-18] leaves the syntax of the calculus unchanged and calculates the transition rates based on an abstract cost function assigned to each action. Both of these approaches have their downsides, as stated in Section 1. In this paper, instead of a cost function, we introduce a more concrete transition rate function which estimates the transition rate for each reduction rule in pi-calculus.

4.1. Transition Rate Functions

In order to calculate the transition rates q_ij of a CTMC derived from a pi-calculus model, we define a transition rate function, TR, for each transition rule of the higher-order pi-calculus listed in Table 2.

Definition 4.1 (Transition Rate Function): Let Λ be the set of transition rules of Table 2. The transition rate function is defined as TR: Λ → R+, where TR(λ_i) = 1/t_i and t_i is the duration of the transition initiated by the reduction rule λ_i. In what follows, the transition rate functions are defined for the higher-order pi-calculus reduction rules.

• TAU: τ_i.P --τ--> P

  TR(τ_i.P --τ--> P) = 1/t_{τi}

Here τ_i is an internal action. Different internal actions may have different execution times, so t_{τi} represents the execution time of action τ_i.

• SUM: if P --α--> P' then P + Q --α--> P'

Similar to TAU:

  TR(P + Q --α--> P') = TR(P --α--> P') = 1/t_{τi}

• PAR: if P --α--> P' then P|Q --α--> P'|Q

  TR(P|Q --α--> P'|Q) = TR(P --α--> P') = 1/t_{τi}

• MATCH: if P --α--> P' then [x=x]P --α--> P'

  TR([x=x]P --α--> P') = 1 / ( time([x=x]) + time(P --α--> P') )

The time needed for the matching operation, time([x=x]), depends on the size of the data to be compared, size(x).

• COM-NAME: if P --x̄y--> P' and Q --x(z)--> Q' then P|Q --τ--> P'|Q'{y/z}

Assuming that the network connection is perfect and no errors occur during sending and receiving messages, we have:

  TR(COM-NAME) = 1 / ( t_startup + (size(y)/bw(x))·(k+1) + t_h·k )

where:
- bw(x) is the bandwidth of channel x, i.e., the total amount of information that can be transmitted over channel x in a given time. When a large number of communication channels use the same network, they have to share the available bandwidth.
- t_startup is the startup time: the time required to handle a message at the sending node. This includes the time to prepare the message (adding header, trailer, and error-correction information), the time to execute the routing algorithm, and the time to establish an interface between the local node and the router. This delay is incurred only once for a single message transfer [20].
- k is the total number of hops that the message traverses (not counting the sender and receiver nodes).
- t_h, called the per-hop time, is the time cost at each network node. After a message leaves a node, it takes a finite amount of time to reach the next node on its path; the time taken by the header of a message to travel between two directly connected nodes is the per-hop time, also known as node latency. It is directly related to the latency within the routing switch for determining which output buffer or channel the message should be forwarded to. In current parallel computers, the per-hop time is quite small; for most parallel algorithms it is less than size(y)/bw(x) even for small messages, and thus it can often be ignored.
- size(y) is the size of the message y.

As an example, suppose the sender in Figure 3 sends a message of size(y) = 200 K to the recipient. The bandwidth of the network is bw(x) = 100 K/second, the sender startup time is t_startup = 1.5 seconds, and the latency at each of the k = 3 hops is t_h = 1 second. The total transfer time t is:

  t = t_startup + (size(y)/bw(x))·(k+1) + t_h·k = 1.5 + (200/100)·(3+1) + 1·3 = 12.5 seconds

and the corresponding transition rate is 1/t = 1/12.5 = 0.08.

• CLOSE-NAME: if P --x̄(w)--> P' and Q --x(w)--> Q' then P|Q --τ--> (νw)(P'|Q')

  TR(CLOSE-NAME) = 1 / ( t_startup + (size(w)/bw(x))·(k+1) + t_h·k + Check(w) )

where Check(w) is the time needed to perform a name check to verify that w does not occur free in Q; Check(w) is in the order of the number of free names in Q.

• COM-PROCESS: if P --(νỹ)x̄⟨K̃⟩--> P' and Q --x(Ũ)--> Q' then P|Q --τ--> (νỹ)(P'|Q'{K̃/Ũ})

  TR(COM-PROCESS) = 1 / ( Marshalling(K) + t_startup + (size(K_code + K_data + K_state)/bw(x))·(k+1) + t_h·k + Unmarshalling(K) )
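The COM-NAME rate above is straightforward to compute. A minimal sketch follows; the function name and parameter names are ours, and the values reproduce the worked Figure 3 example (200 K message, 100 K/s bandwidth, 3 hops).

```python
def com_name_rate(size_y, bw_x, k, t_startup, t_hop):
    """Transition rate 1/t for sending a name of size `size_y` over a
    channel with bandwidth `bw_x`, across k intermediate hops, with the
    given startup time and per-hop latency (COM-NAME formula)."""
    t = t_startup + (size_y / bw_x) * (k + 1) + t_hop * k
    return 1.0 / t

rate = com_name_rate(size_y=200, bw_x=100, k=3, t_startup=1.5, t_hop=1.0)
# t = 1.5 + 2*4 + 3 = 12.5 seconds, so rate = 1/12.5 = 0.08
```

The CLOSE-NAME and COM-PROCESS rates add their extra terms (Check(w), marshalling and unmarshalling times) to the same denominator in the obvious way.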

When a process is moved to a remote computer, its code, data, and sometimes its state of execution are transferred so that it can resume its execution after migration [21]. K_code, K_data, and K_state are the code, data, and state of process K, respectively. Marshalling(K) is the time needed to convert K into a standard format (a byte stream) before it is transmitted over the network; Unmarshalling(K) is the reverse process of converting the byte stream back into process K after migration.

With the transition rate functions defined above, a system modeled in pi-calculus can be transformed into a CTMC, and its performance measures can be computed by following the standard numerical methods for CTMCs discussed in Section 3. Generally, the steps for CTMC-based performance evaluation of a system described in pi-calculus or its extensions are as follows:

1) Form the state diagram of the system. The initial pi-calculus specification represents the initial state of the system; every reduction applied to the specification produces a next possible state. The full state transition diagram is generated by applying all possible reductions.

2) Use the transition rate functions to calculate the transition rates for each pair of states. The transition rate q_ij between states S_i and S_j is defined in terms of the transition rate function:

  q_ij = TR(λ),  if S_j is derived immediately from S_i by applying the reduction rule λ
  q_ii = -Σ_{j≠i} q_ij
  q_ij = 0,  otherwise   (4.1)

3) By associating each rate with its appropriate transition in the state diagram, the CTMC diagram is formed and the infinitesimal generator matrix is generated. Note that the transition rates are independent of time (each depends only on the transition rule); hence the corresponding CTMC is time-homogeneous. To ensure that the resulting transition system conforms to the memoryless property, we assume that the sojourn time of a state is exponentially distributed [16]. That is:

  prob(sojourn(S_i) ≤ t) = 1 - e^{-λ_i t},  where λ_i = Σ_{j≠i} q_ij   (4.2)

4) If the resulting CTMC is an ergodic Markov chain, a unique steady-state probability vector exists and is found by solving Equation (3.4). If a state-based Markov reward model is used, the final performance value of the system equals Σ_{i=1}^{n} r_i π_i, where r_i and π_i are the reward rate and the steady-state probability for state S_i, respectively.

Figure 3. Message sending and receiving across the network [20].
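Steps 1 through 4 can be sketched end to end for a toy model. Everything below is assumed for illustration: the state names, the transition list (playing the role of the reduction graph from step 1), and the TR values per rule.

```python
import numpy as np

# Assumed rates 1/t_i for two reduction rules (illustrative values only).
rates = {'TAU': 10.0, 'COM-NAME': 0.08}

# Step 1 (assumed result): reachable states and the rule firing between them.
states = ['S0', 'S1', 'S2']
transitions = [('S0', 'S1', 'COM-NAME'),
               ('S1', 'S2', 'TAU'),
               ('S2', 'S0', 'TAU')]

# Steps 2-3: build the generator with q_ij = TR(rule), Equation (4.1),
# and diagonal entries equal to minus the sum of the other row entries.
idx = {s: i for i, s in enumerate(states)}
Q = np.zeros((len(states), len(states)))
for src, dst, rule in transitions:
    Q[idx[src], idx[dst]] = rates[rule]
np.fill_diagonal(Q, -Q.sum(axis=1))

# Step 4: steady-state vector pi with pi @ Q = 0 and sum(pi) = 1.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(len(states)); b[-1] = 1.0
pi = np.linalg.solve(A, b)
# The slow COM-NAME transition out of S0 dominates: the system spends
# almost all of its time in S0 (pi[0] is close to 1).
```

A state-based reward model is then just a dot product of a reward vector with `pi`, as in Equation (3.14).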

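The pipeline in steps 2–5 above can be sketched in a few lines of code: assemble the infinitesimal generator matrix of Equation (4.1) from per-transition rates, solve $\pi Q = 0$ with $\sum_i \pi_i = 1$ for the steady-state vector, and weight it by per-state rewards. This is only an illustrative sketch, not part of the paper's toolset: the two-state model, its rates, and the reward values are hypothetical, and a small pure-Python Gaussian elimination stands in for the numerical CTMC solvers discussed in Section 3.

```python
def generator_matrix(n, rates):
    """Build Q: off-diagonal entries are the transition rates TR(->);
    each diagonal is set so the row sums to zero (Equation 4.1)."""
    Q = [[0.0] * n for _ in range(n)]
    for (i, j), r in rates.items():
        Q[i][j] = r
    for i in range(n):
        Q[i][i] = -sum(Q[i][j] for j in range(n) if j != i)
    return Q

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1. Since Q is rank-deficient,
    one balance equation is replaced by the normalization row."""
    n = len(Q)
    # Transpose Q so the unknowns pi_i appear as a column vector.
    A = [[Q[j][i] for j in range(n)] for i in range(n)]
    b = [0.0] * n
    A[n - 1] = [1.0] * n   # normalization: probabilities sum to 1
    b[n - 1] = 1.0
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution.
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(A[r][c] * pi[c] for c in range(r + 1, n))
        pi[r] = (b[r] - s) / A[r][r]
    return pi

# Hypothetical 2-state model: S0 --rate 1.5--> S1, S1 --rate 0.5--> S0.
Q = generator_matrix(2, {(0, 1): 1.5, (1, 0): 0.5})
pi = steady_state(Q)            # pi = [0.25, 0.75]
rewards = [1.0, 0.2]            # hypothetical reward rate r_i per state
performance = sum(r * p for r, p in zip(rewards, pi))   # sum of r_i * pi_i
```

Replacing one balance equation with the normalization row is the standard workaround for the singular generator matrix; production tools would use a sparse solver instead of dense elimination.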
5. Conclusion and Future Work

Although the pi-calculus family of formal languages is widely used, it provides no native theoretical mechanism for evaluating the performance of different design approaches to a particular system. This article presented a Markov-based performance evaluation methodology for systems specified in pi-calculus. Such a methodology can be applied at the design stage to assess the performance of a system prior to its implementation and to provide a benchmark for comparing different design schemes against one another. Previous works [22-24] on integrating performance measures with process algebra have either introduced additional computational overhead by extending the calculus to stochastic pi-calculus or proposed cost functions that are too abstract. We defined concrete transition rate functions with the features necessary for real-world applications. Bearing in mind that the performance of a multi-process system relies mainly on network communication cost, our transition rate functions revolve mostly around the time performance measures for sending and receiving processes. In addition, we accounted for the overhead of process mobility in our transition functions.

JSEA


Future studies should focus mainly on the scalability of the methodology to large, complex systems with huge numbers of states and transition rates. Many approaches have been proposed in the literature to address the problem of state explosion in Markov chains [25-27]. Incorporating these approaches into the performance evaluation of pi-calculus models remains future work.

REFERENCES

[1] R. Milner, J. Parrow and D. Walker, "A Calculus of Mobile Processes—Part I and II," LFCS Report 89-85, University of Edinburgh, Edinburgh, 1989.

[2] R. Milner, "Communicating and Mobile Systems: The π-Calculus," Cambridge University Press, Cambridge, 1999.

[3] R. Milner, "The Polyadic Pi-Calculus: A Tutorial," Technical Report ECS-LFCS-91-180, Computer Science Department, University of Edinburgh, Edinburgh, 1991.

[4] D. Sangiorgi, "The π-Calculus: A Theory of Mobile Processes," Cambridge University Press, Cambridge, 2001.

[5] D. Sangiorgi, "Expressing Mobility in Process Algebras: First-Order and Higher-Order Paradigms," Ph.D. Thesis, University of Edinburgh, Edinburgh, 1993.

[6] C. Priami, "Stochastic π-Calculus," Computer Journal, Vol. 38, 1995, pp. 578-589. doi:10.1093/comjnl/38.7.578

[7] N. Götz, U. Herzog and M. Rettelbach, "TIPP—A Language for Timed Processes and Performance Evaluation," Technical Report 4/92, IMMD VII, University of Erlangen-Nürnberg, Erlangen, 1992.

[8] J. Hillston, "A Compositional Approach to Performance Modelling," Ph.D. Thesis, University of Edinburgh, Edinburgh, 1994.

[9] M. Bernardo, L. Donatiello and R. Gorrieri, "MPA: A Stochastic Process Algebra," Technical Report UBLCS-94-10, University of Bologna, Bologna, 1994.

[10] P. Buchholz, "On a Markovian Process Algebra," Technical Report Informatik IV, University of Dortmund, Dortmund, 1994.

[11] L. de Alfaro, "Stochastic Transition Systems," Proceedings of the Ninth International Conference on Concurrency Theory (CONCUR'98), Vol. 1477, 1998, pp. 423-438. doi:10.1007/BFb0055639

[12] J. Markovski and E. P. de Vink, "Performance Evaluation of Distributed Systems Based on a Discrete Real- and Stochastic-Time Process Algebra," Fundamenta Informaticae, Vol. 95, No. 1, October 2009, pp. 157-186.

[13] A. Clark, S. Gilmore, J. Hillston and M. Tribastone, "Stochastic Process Algebras," SFM'07: Proceedings of the 7th International Conference on Formal Methods for Performance Evaluation, 2007, pp. 132-179.

[14] P. R. D'Argenio and J. Katoen, "A Theory of Stochastic Systems. Part II: Process Algebra," Information and Computation, Vol. 203, No. 1, November 2005, pp. 39-74. doi:10.1016/j.ic.2005.07.002

[15] S. M. Ross, "Stochastic Processes," 2nd Edition, Wiley, New York, 1996.

[16] G. Bolch, S. Greiner, H. de Meer and K. S. Trivedi, "Queuing Networks and Markov Chains: Modelling and Performance Evaluation with Computer Science Applications," Wiley, New York, 1998.

[17] C. Nottegar, C. Priami and P. Degano, "Performance Evaluation of Mobile Processes via Abstract Machines," IEEE Transactions on Software Engineering, Vol. 27, No. 10, 2001, pp. 867-889. doi:10.1109/32.962559

[18] F. Logozzo, "Pi-Calculus as a Rapid Prototype Language for Performance Evaluation," Proceedings of the ICLP 2001 Workshop on Specification, Analysis and Validation for Emerging Technologies in Computational Logic (SAVE 2001), 2001.

[19] S. Rahimi, M. Cobb, D. Ali, M. Paprzycki and F. Petry, "A Knowledge-Based Multi-Agent System for Geospatial Data Conflation," Journal of Geographic Information and Decision Analysis, Vol. 6, No. 2, 2002, pp. 67-81.

[20] A. Grama, G. Karypis, V. Kumar and A. Gupta, "An Introduction to Parallel Computing: Design and Analysis of Algorithms," 2nd Edition, Addison-Wesley, Reading, 2003.

[21] W. R. Cockayne and M. Zyda, "Mobile Agents," Manning Publications Company, Greenwich, 1998.

[22] H. Samet, "The Design and Analysis of Spatial Data Structures," Addison-Wesley, Reading, 1989.

[23] S. Rahimi, "Using Api-Calculus for Formal Modeling of SDIAgent: A Multi-Agent Distributed Geospatial Data Integration," Journal of Geographic Information and Decision Analysis, Vol. 7, No. 2, 2003, pp. 132-149.

[24] S. Rahimi, J. Bjursell, M. Paprzycki, M. Cobb and D. Ali, "Performance Evaluation of SDIAGENT, a Multi-Agent System for Distributed Fuzzy Geospatial Data Conflation," Information Sciences, Vol. 176, No. 9, 2006. doi:10.1016/j.ins.2005.07.009

[25] S. Derisavi, P. Kemper and W. H. Sanders, "Symbolic State-Space Exploration and Numerical Analysis of State-Sharing Composed Models," Linear Algebra and Its Applications, Vol. 386, July 2004, pp. 137-166. doi:10.1016/j.laa.2004.01.006

[26] S. Derisavi, H. Hermanns and W. H. Sanders, "Optimal State-Space Lumping in Markov Chains," Information Processing Letters, Vol. 87, No. 6, 2003, pp. 309-315. doi:10.1016/S0020-0190(03)00343-0

[27] P. Buchholz, "Efficient Computation of Equivalent and Reduced Representations for Stochastic Automata," International Journal of Computer Systems Science & Engineering, Vol. 15, No. 2, March 2000, pp. 93-103.

