Towards Dynamic Meta Modeling of UML Extensions: An Extensible Semantics for UML Sequence Diagrams

Jan Hendrik Hausmann, Reiko Heckel and Stefan Sauer
University of Paderborn, Department of Mathematics and Computer Science
D-33095 Paderborn, Germany
{corvette, reiko, sauer}@uni-paderborn.de

Abstract

The Unified Modeling Language (UML) still lacks a formal and commonly agreed specification of its semantics that also accounts for UML's built-in semantic variation points and extension mechanisms. The semantics specification of such extensions must be formally integrated and consistent with the standard UML semantics without changing the latter. Feasible semantics approaches must thus allow advanced UML modelers to define domain-specific language extensions in a precise, yet usable manner. We have proposed dynamic meta modeling for specifying operational semantics of UML behavioral diagrams based on UML collaboration diagrams that are interpreted as graph transformation rules. Herein we show how this approach can be advanced to specify the semantics of UML extensions. As a case study we specify the operational semantics of UML sequence diagrams and extend this specification to include features for modeling multimedia applications.

Keywords: UML semantics, extension mechanisms, multimedia, graph transformation, dynamic meta modeling

1 Introduction

The Unified Modeling Language (UML; [17]) has become the OMG standard in (object-oriented) software modeling. It promises intuitive understandability by using diagram languages to represent the structure and behavior of a software system. Nevertheless, it still lacks a precise and commonly agreed specification of its semantics. This hinders its use in practice, since much time is wasted on clarifying a common understanding of the modeling artifacts. Thus, it is commonly agreed that a precise definition of UML's semantics is needed. Such a semantics specification must not only be formal, but also understandable by advanced modelers and tool developers. Since the UML is designed to be a general-purpose modeling language, there is no single, generally applicable method or development process.

As a consequence, many semantic questions are (intentionally) left open so as not to constrain the application of the UML. However, in order to be successful in specific domains, the language needs to be specialized and supplemented with methodical guidelines. For this purpose, the UML provides extension mechanisms to define stereotypes of existing model elements, and to define tagged values and constraints for well-formed models under such extensions. These (syntactic) extension mechanisms enable the creation of UML dialects, so-called profiles [16], which are tailored to specific domains of software engineering, like the modeling of real-time systems, business processes, etc. On a smaller scale, a similar strategy has been identified for defining the syntax and semantics of the standard (general-purpose) language as an extension of a UML core [5]. Each syntactic language extension has to be accompanied by a specification of its semantics. This specification should, incrementally, describe the semantics of the new language elements while preserving the meaning of the original language. This can be done, for example, by projecting the new language constructs back to predefined model patterns in the core language, or by extending the semantics specification to describe original behavior for the new elements. In general, the techniques used for semantics specification must support the specialization and extension of specifications. Furthermore, the definition of UML extensions should be possible down to the project level. Thus, it will typically be done by advanced modelers rather than by specialized semantics experts. This stresses the need for understandable and manageable specifications. The approach of dynamic meta modeling (DMM; as introduced in [4]) is intended for defining a precise and understandable semantics of standard UML. In this paper, we demonstrate its suitability for specifying the semantics of language extensions. As a case study, we present a specification of UML sequence diagrams and introduce a small extension for the modeling of multimedia applications (derived from the OMMMA approach [21]).

In comparison to existing approaches towards the formalization of UML collaborations and sequence diagrams (e.g., [13, 18]), dynamic meta modeling employs UML notation to present the (dynamic) semantics of behavioral diagrams. This approach has already been used to define the abstract syntax of the UML, the claim being that meta modeling reaches a wider audience (virtually every advanced UML user, independently of any training in formal methods) than established theoretical notations. In contrast to semantics specifications based on UML class diagrams and OCL constraints for defining pre- and postconditions, which can be regarded as meta modeling as well, dynamic meta modeling deploys only UML's visual diagrams and is thus more accessible. The presentation in this paper continues with an introduction to the approach of dynamic meta modeling for the specification of UML behavioral semantics in Sect. 2. In Sect. 3, we then deploy dynamic meta modeling to specify the semantics of interactions modeled within standard UML sequence diagrams. Section 4 describes an extended notion of sequence diagrams featuring, for example, a construct for synchronization within multimedia presentations. Then, in Sect. 5, we show how the behavior of this extension can be specified by a combination of projection semantics and the introduction of new semantic rules. We conclude by summarizing the current achievements and sketching future perspectives of this work.

Figure 1. DMM rule describing an asynchronous method call

2 Dynamic Meta Modeling

The definition of the UML contains four parts. The concrete syntax of the UML strongly depends on the tools implementing the specification; therefore, only standard notations and presentation options are given. The abstract syntax of the UML is given in terms of UML class diagrams which form the meta model of the UML. This meta model contains meta classes and meta associations interrelating them. The instantiations of this meta model form the set of all UML models. While the syntax is thus well defined and unambiguous, the semantics is defined rather loosely. While the constraints for well-formedness are semi-formalized using the Object Constraint Language (OCL), the dynamic semantics consists only of a natural-language description of the intended meaning of the defined constructs and their cooperation. This kind of semantics is easily understandable, but contains incomplete, ambiguous, and contradictory parts. The approach of dynamic meta modeling has been developed to achieve a semantics description for UML which is precise and formal, yet easily understandable even for people unaccustomed to the field of formal semantics. It is based on the idea of decomposing the dynamics of a model into the dynamics of the model parts.

The dynamics of the model parts are defined by meta operations on the elements of the abstract syntax of UML, the meta model. The behavior of such a meta operation is defined by operational semantics rules presented as UML collaboration diagrams. These collaboration diagrams capture the changes between the pre- and post-state of the meta operation by labeling elements with the standard constraints {new} and {destroyed}. If other meta operations are employed to achieve the intended result, they are depicted as messages. We thus speak of dynamic meta modeling, since we define UML semantics by employing UML diagrams.

An example of such a rule is given in Fig. 1. This rule describes the effects of the meta operation perform, which is defined for the meta class CallAction (this information is noted in the head of the diagram). A CallAction represents the invocation of a method on another object. The elements participating in this rule are therefore the CallAction, the Object in whose scope the CallAction is to be performed, and the Object on which the method is to be called (identified by a path expression in the CallAction). The communication between the objects is represented as a Stimulus. As this rule describes asynchronous communication, only the sending of the message (i.e., the creation of the Stimulus) is described. The constraint {new} on the Stimulus indicates that this element is created during the application of the rule. The labeled arrows indicate the invocation of other meta operations to achieve the goal of performing a CallAction. While eval_target(scope, target) evaluates the path expression to the target object (see [4] for details), the meta operation create on the meta class Stimulus is merely a constructor that instantiates all necessary links to other objects. The order of execution of the meta operation calls is determined by labeling them with sequence numbers. Thus, performing a CallAction consists of the evaluation of the target object for the method call and the dispatching of a message towards the target object.
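To give a feel for how such a rule reads operationally, the following sketch (our own illustration in Python; it is not part of the DMM definition, and helper names such as eval_target mirror the rule of Fig. 1 only informally) walks through the same steps: the precondition is a CallAction in the scope of an object, the path expression is evaluated to the target, and a {new} Stimulus is created and linked.

```python
# Illustrative reading of the DMM rule in Fig. 1 (not part of the formal
# specification). Class names mirror the meta classes; eval_target stands in
# for the path-expression evaluation that the rule delegates to (see [4]).

class Object:
    def __init__(self, name):
        self.name = name
        self.links = {}                     # path expression -> linked object

class Stimulus:
    """Meta class Stimulus: the communication of a message between objects."""
    def __init__(self, sender, receiver, dispatched_by):
        self.sender, self.receiver, self.dispatched_by = sender, receiver, dispatched_by

class CallAction:
    """Meta class CallAction: the invocation of a method on another object."""
    def __init__(self, operation, target_path):
        self.operation, self.target_path = operation, target_path

    def eval_target(self, scope):
        # 1: evaluate the path expression in the scope of the calling object
        return scope.links[self.target_path]

    def perform(self, scope):
        # 2: create the {new} Stimulus; the call is asynchronous, so nothing waits
        target = self.eval_target(scope)
        return Stimulus(sender=scope, receiver=target, dispatched_by=self)

# usage: an object holding a link "display" performs an asynchronous call
ui, display = Object("ui"), Object("display")
ui.links["display"] = display
assert CallAction("refresh", "display").perform(ui).receiver is display
```

The two numbered comments correspond to the sequence numbers that order the meta operation calls in the rule.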

Figure 2. Excerpt from the UML meta model describing collaborations

Although the operational semantics rules are represented as UML collaboration diagrams, the specification is mathematically rigorous, since collaboration diagrams are given a formal interpretation based on graph transformation rules (see, e.g., [1] for an introductory text) within our approach. In particular, they can be considered as a special form of graphical operational semantics (GOS) rules [2]. GOS rules are a generalization of Plotkin's structured operational semantics (SOS) paradigm for the definition of (textual) programming languages and process calculi [19] towards graphs. We now use this approach to specify the dynamic semantics of UML sequence diagrams.

3 A Dynamic Semantics for UML Sequence Diagrams

Sequence diagrams in the UML represent the flow of messages between different objects. They are derived from Message Sequence Charts (MSC; [12]) and employ a similar notation. Sequence diagrams are used for different purposes in the UML. On the one hand, they can be used to model a possible course of behavior, e.g., a trace of messages in an instance of the model (at runtime); on the other hand, they may be used as part of a specification that prescribes mandatory behavior of a system. In this paper, only the latter kind is of interest. In the UML specification [17], sequence diagrams (as well as collaboration diagrams, which additionally support the specification of links between the interacting objects) are based on the meta model concept of Collaboration. The abstract syntax of collaborations is given in Fig. 2. A Collaboration is the description of a communication pattern.

Figure 3. A sample sequence diagram displaying a communication pattern

Since this pattern should be applicable to all objects meeting certain requirements, it contains ClassifierRoles as participants. These classifier roles make it possible to formulate requirements on the objects participating in an actual instance of this collaboration. A ClassifierRole is related to one or more base Classifiers which determine the type of the object. The set of messages grouped into an Interaction describes the communication pattern on the participating roles. There are two ordering associations on the meta class Message: the predecessor/successor edge connects messages which belong to the same action sequence, i.e., they originate from the same method execution. The activator edge connects each message to the calling message that initiated the corresponding method execution. Applied to the example in Fig. 3, this means that the messages refresh() and processInput() originating from the UIController are connected via a predecessor/successor edge, and both are connected to buttonPressed() via an activator edge. Typically, a sequence diagram displays all messages in the scope of the collaboration resulting from the initiating method call. Since we want to specify an object-oriented system, this global view on the interaction has to be decomposed into local behavior specifications for each method of the participating objects/classes.
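For readers who prefer code to diagrams, the following self-contained sketch (our own illustration; it is not part of the UML specification) transcribes the relevant associations of Fig. 2 into Python classes, rebuilds the message ordering discussed for Fig. 3, and performs the decomposition along the activator edges announced above as a simple grouping step.

```python
# Illustrative transcription of the meta-model excerpt (Fig. 2) and of the
# decomposition along activator links; the code is our own sketch.
from collections import defaultdict
from dataclasses import dataclass
from typing import Optional

@dataclass(eq=False)
class ClassifierRole:
    name: str                                  # base Classifiers omitted for brevity

@dataclass(eq=False)
class Message:
    name: str
    receiver: ClassifierRole
    activator: Optional["Message"] = None      # call that initiated this method execution
    predecessor: Optional["Message"] = None    # previous message of the same action sequence

# the message ordering discussed for Fig. 3
ui = ClassifierRole(":UIController")
display = ClassifierRole(":Display")
app = ClassifierRole(":Application")
button = Message("buttonPressed()", ui)
refresh = Message("refresh()", display, activator=button)
process = Message("processInput()", app, activator=button, predecessor=refresh)

def decompose(messages):
    """Group messages by their activator: each group is one candidate behavior
    specification for the method invoked by that activator (empty groups, e.g.
    for Display.refresh(), are not produced in this miniature version)."""
    specs = defaultdict(list)
    for m in messages:
        if m.activator is not None:
            specs[m.activator.receiver.name + "." + m.activator.name].append(m.name)
    return dict(specs)

print(decompose([button, refresh, process]))
# {':UIController.buttonPressed()': ['refresh()', 'processInput()']}
```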

Figure 4. DMM rule initiating the execution of a specification

This decomposition is easily carried out along the activator links. As a result, we obtain sequences of messages that specify the observable behavior of one method. Applied to the sequence diagram in Fig. 3, this results in specifications for UIController.buttonPressed(), Display.refresh() and Application.processInput(). Note that this decomposition may result in several specifications for a single method; in the example, two specifications for Display.refresh() can be derived. It cannot be guaranteed that these different specifications are equivalent or mutually exclusive. Therefore, either a choice or an integration approach has to be applied to obtain a single method specification. In this paper, we employ a non-deterministic choice between the possible specifications. For reasons of simplicity, in this section we restrict the communication between the participating classifier roles to the passing of asynchronous method calls (initiated by UML CallActions and represented by Stimuli). The operational semantics of asynchronous sequence diagrams is specified by the rules in Figs. 1 and 4–7. For the purpose of this paper, we dispense with additional constructs such as concurrency, guard conditions, loops, and conditional flow. Figure 4 depicts the reception of a call by an object. Preconditions for the application of this rule are the connection of a stimulus to the object (via a receiver edge) and the specification of the called method by a collaboration. The reception of a stimulus is then performed by executing the specified collaboration and deleting the stimulus (the rule for Stimulus.destroy() is not given here, since it is a simple destructor). The execution of the collaboration consists of three steps, as specified by the rules in Figs. 5–7. Note that no further information is given on the classifier role :Collaboration. In case several collaborations exist that describe one operation, a non-deterministic choice is performed by mapping one of them to the classifier role. The three steps are (1) to find the first message in the sequence of execution, (2) to process this and any following messages, and (3) to finish with the last message.
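The reception rule of Fig. 4 and the non-deterministic choice between competing collaborations can be paraphrased as follows; this is an informal sketch in Python, and names such as execute and specifications are our own, not meta-model vocabulary.

```python
# Paraphrase of the reception rule (Fig. 4); illustration only.
import random

def receive(receiver, stimulus, specifications, execute):
    """Precondition: the stimulus is connected to the receiver and at least one
    collaboration specifies the called operation. Effect: one collaboration is
    chosen non-deterministically and executed, then the stimulus is destroyed."""
    candidates = specifications[stimulus["operation"]]
    collaboration = random.choice(candidates)   # non-deterministic choice
    execute(receiver, collaboration)            # three-step execution, Figs. 5-7
    stimulus["destroyed"] = True                # Stimulus.destroy()

# toy usage: two alternative specifications exist for refresh()
specs = {"refresh()": ["refresh-spec-1", "refresh-spec-2"]}
stim = {"operation": "refresh()", "destroyed": False}
receive(":Display", stim, specs, execute=lambda obj, c: print("executing", c, "for", obj))
assert stim["destroyed"]
```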

Figure 5. DMM rule identifying the first message in the scope of a collaboration

Figure 6. DMM rule for processing the message specifications

Finding the first message is accomplished by the rule in Fig. 5. The dashed and crossed-out ellipse around the lower right part of the diagram represents a Negative Application Condition (NAC; [10]). The intention of this construct is to ensure that the specified structure does not occur in the context of the rule application. Here it means that the role of the message labeled m1 must not be played by any message bearing a predecessor. Thus, only the first message in the scope of the collaboration is suitable for this role. The rule in Fig. 6 processes the first and any subsequent messages by performing the action specified in the message and advancing to the next message in order. The rule in Fig. 7 is basically the same, except that it deals with the last message of an activation by explicitly forbidding the occurrence of any successors (NAC in the lower right part). Thus, all messages are processed in the specified order. As the messages are part of the specification, they are not deleted after their processing (unlike a stimulus, which ceases to exist after its reception). To process a message, the specified action has to be performed as described by the rule in Fig. 1.
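Read operationally, the three rules amount to the following loop; the sketch is our own paraphrase (the class M and the helper names are invented for illustration), with the negative application conditions of Figs. 5 and 7 showing up as the absence of a predecessor and of a successor, respectively.

```python
# Paraphrase of the rules in Figs. 5-7 (illustration only, not the formal rules).

class M:
    """Minimal stand-in for a Message with a predecessor edge."""
    def __init__(self, name, predecessor=None):
        self.name, self.predecessor = name, predecessor

def first_message(messages):
    # NAC of Fig. 5: the first message is the one without a predecessor
    candidates = [m for m in messages if m.predecessor is None]
    assert len(candidates) == 1, "exactly one first message expected"
    return candidates[0]

def process_messages(messages, perform):
    # Figs. 6 and 7: perform each message's action and advance along the
    # successor chain until no successor is left (NAC of Fig. 7)
    successor = {m.predecessor: m for m in messages if m.predecessor is not None}
    current = first_message(messages)
    while current is not None:
        perform(current)                    # e.g. CallAction.perform, cf. Fig. 1
        current = successor.get(current)

m1 = M("refresh()")
m2 = M("processInput()", predecessor=m1)
process_messages([m1, m2], perform=lambda m: print("perform", m.name))
# perform refresh()
# perform processInput()
```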

              

Figure 7. DMM rule for processing the last specified message

4 A Multimedia Extension of UML Sequence Diagrams

Following the description of syntax and dynamic semantics of UML sequence diagrams, in this and the following section we will present an extension for the modeling of multimedia presentation scenes. For this purpose, we consider sequence diagrams as they have been introduced for the language OMMMA-L (Object-oriented Modeling of MultiMedia Applications – The Language; [21]). OMMMA-L has been proposed as an extension of UML for the specification of interactive multimedia presentations. On an analysis level, an OMMMA model basically consists of

- a class diagram containing application classes that may be related to media classes originating from a predefined media type hierarchy and presentation classes for rendering media content,

- statechart diagrams representing statemachines to specify dynamic, event-driven system behavior (interactive or reactive), e.g. in response to user interaction,

- (extended) sequence diagrams modeling predefined sequences of presentation behavior, so-called scenes, and

- newly introduced presentation diagrams specifying the spatial layout of the presentation.

As in the Model-View-Controller paradigm (MVC; [14]) for interactive software, OMMMA structures a model according to the roles objects play for the application's overall functionality. In the case of MVC, Model objects define the functional core of an (interactive) component, whereas View and Controller objects represent the perceivable behavior of the component with respect to output and input, respectively.

Referring to Fig. 3, the role :UIController is a Controller object, :Application is a Model object, and :Display is a View object. An OMMMA model thus distinguishes three basic stereotypes of objects: «application», «media», and «presentation». Application objects represent the semantic or domain entities of a multimedia application, such as a company video presentation, a movie on a specific subject, or a piece of multimedia course material. They correspond to Model objects of the MVC paradigm. An application object is associated with one or more media objects. Media objects represent the multimedia information of application objects. They encapsulate the (application-independent) media content, such as a video stream or file, and the media descriptor. Media objects are used to store and communicate the multimedia information to a human. The distinction between media objects and application objects is particularly important for flexible multimedia application design: different media objects can be used to communicate the same semantic information. Presentation objects represent the entities of a multimedia application that are responsible for rendering the multimedia content supplied by media objects. Presentation objects correspond to View objects of the MVC paradigm. In cases of interactive media control, they are related to Controller objects (in MVC terminology) via their associated application object. User interaction is thus decoupled from the media presentation. Presentation objects appear on OMMMA-L presentation diagrams. Their position and size on the presentation diagram specify where information is to be presented to a user. Presentation diagrams are stereotyped collaboration diagrams on which spatial constraints for the contained presentation objects are graphically specified. For the purpose of presenting the multimedia information to a user, objects of the three stereotypes must collaborate. It is the responsibility of an application object to coordinate and control the behavior of its assigned media and presentation objects, as well as to communicate with other components of the surrounding application. In this paper, we focus on the pattern of interaction between application, media, and presentation objects. As stated above, statechart diagrams are used to model event-driven, reactive behavior of a multimedia application, especially in response to user interaction. These statecharts are coupled with sequence diagrams that technically specify actions on state transitions or activities that are performed while in a particular state (e.g., entry or exit actions, or do activities). Such sequence diagrams represent predefined scenes that show non-reactive behavior. Thus, any user interaction altering the presentation of a predefined scene, if permitted, must be specified within statechart diagrams.

In the following, we focus on the specification of predefined scenes within sequence diagrams. According to the OMMMA approach, the only objects of the three stereotypes shown on an analysis-level sequence diagram are application objects. On this level of abstraction, the modeler just specifies when to present the information corresponding to the semantic entities. It is not necessary to model the corresponding activation and deactivation of associated media and presentation objects. Nevertheless, an understanding of this mechanism is crucial for the precise specification of the semantics, as will be shown in the following.

As the term multimedia implies, a typical scene of a multimedia application will include several application objects controlling the sequential or simultaneous presentation of multimedia content. UML sequence diagrams are only partially suited to model scenes of this kind. The example in Fig. 8 shows some of the special features required. A simple presentation scene (as it may occur when playing a DVD) is displayed: a trailer is presented, followed by the main video accompanied by a sound track (consisting of speech and sound). As the video may be combined with sound tracks in different languages, a sound track is regarded as an individual semantic entity and not just a media (stream) object. Thus, sound tracks are modeled as individual application objects. In OMMMA-L sequence diagrams, activations of application objects model their actual presentation to the user. It is crucial for a multimedia presentation that certain requirements concerning synchronization are met. In this example, the video and the sound track have to be played in unison. Normal sequence diagrams do not provide an adequate mechanism to ensure a synchronous execution of activations. In particular, as we shall see in the next section, such a semantics is not captured by synchronous or asynchronous messages in standard UML sequence diagrams. Thus, a new modeling construct, the synchronization, is introduced. It is depicted as a bold bar between the activations in the example in Fig. 8. The intended interpretation of the two synchronization bars is that the presentations of the sound and the video should start and end at the same time. In the following section, we specify the behavior of these extensions.

Figure 8. Multimedia sequence diagram containing synchronization

5 The Dynamic Semantics of OMMMA-L Sequence Diagrams

First, we conduct a more detailed analysis of how the extended semantics of OMMMA-L sequence diagrams can be specified incrementally on top of the standard semantics of Sect. 3. The idea is to specify new semantics sparsely, i.e., only when the modeling construct at hand is original and of a general nature, and to realize all other features by projecting them onto patterns of the standard language. Thus, we start by inspecting whether the introduced constructs really carry new semantics, or whether they are only syntactic abbreviations. We already emphasized that the intended meaning of activations of OMMMA-L application objects differs from the normal interpretation. As application objects do not contain the real presentation of the media objects, but rather act as controllers for the presentation objects, they do not need to be active during the whole presentation. They rather (asynchronously) initiate the presentation and finish their activation, so as to be ready for further interaction with other components of the application. The activation that is presented in the OMMMA-L sequence diagram constitutes a more abstract, logical view: it rather shows the activation of the presentation object, which overlays that of the application object. Figure 9 explains this difference by showing the hidden pattern of interaction between application and presentation objects. Activations of application objects thus do not carry special semantics but are a syntactic abbreviation, in the sense that the common notation (concrete syntax) for an activation is mapped to a complex pattern of interaction in the abstract syntax, i.e., the UML meta model. Therefore, no new semantic rules have to be defined for this construct. More formally, this kind of pattern expansion can be specified by transformation rules on diagrams, as suggested, for example, in [7]. Synchronous execution, on the other hand, shall not be derived from combining several syntactical elements of the base language. As different objects are involved in a synchronization, a communication has to happen between the objects. In standard UML, this can be either asynchronous communication, which does not force the immediate processing of the synchronization message by the receiver, or synchronous communication, which renders the sender inactive for the time of the execution on the receiver object. Thus, in both cases a synchronous activation of the objects is not guaranteed. We therefore need synchronization as a new construct with special semantics. (Of course, it would be possible to implement synchronization by message passing; however, this requires assumptions, e.g., about the time of delivery of messages and the preservation of message order, which cannot, in general, be justified.)
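The pattern expansion of activations described above could be realized as a purely syntactic rewriting on the model, roughly along the following lines. This is a hypothetical sketch: the delegated message and the names of the presentation objects are assumptions on our part, since the exact shape of the hidden pattern of Fig. 9 is not spelled out in the text.

```python
# Hypothetical sketch of the pattern expansion of application-object activations.
# The delegated message and the presenter names are assumed, not taken from the
# paper; only the idea of replacing an activation by a delegation is shown.

def expand_scene(scene, presenter_of):
    """scene: list of (caller, application_object, message) triples as drawn in
    the OMMMA-L sequence diagram. Each abstract activation is expanded so that
    the application object immediately delegates the presentation to its
    associated presentation object and then ends its own (short) activation."""
    expanded = []
    for caller, application, message in scene:
        expanded.append((caller, application, message))                     # call as drawn
        expanded.append((application, presenter_of[application], message))  # assumed hidden delegation
    return expanded

# expanding (part of) a scene like the one in Fig. 8; presenter names are invented
scene = [(":Scene", ":Video", "start()"), (":Scene", ":SoundTrack", "start()")]
presenters = {":Video": ":VideoPresenter", ":SoundTrack": ":AudioPresenter"}
for step in expand_scene(scene, presenters):
    print(step)
```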

Figure 9. Pattern of interaction between application and presentation objects that is hidden within the activation of the application object on an OMMMA-L sequence diagram

Figure 10. DMM rule for initiating a parallel invocation

We decided to model the synchronous execution of methods semantically by parallel method invocations based on a master-slave concept. The parallel invocation of operations is described by the rules depicted in Figs. 10 and 11. The sending of a parallel operation call is specified using a newly introduced meta class ParallelCallAction. We also introduce a new concept for the simultaneous execution of meta operations: by giving the two meta operation calls Collaboration.process_message(object, m2) and ParallelCallAction.perform(object) the same sequence number, the simultaneous execution of these calls is expressed. As identical sequence numbers are not defined in standard UML, the new feature is clearly distinct from the standard UML semantics of nested invocation (sequence numbers with identical prefix) and concurrent or independent invocation (sequence numbers with attached characters). This feature enables us to enforce the desired synchronization at the model level. The execution of a ParallelCallAction is specified by the rule in Fig. 11. Unlike an asynchronous CallAction, a ParallelCallAction is not communicated via a Stimulus, but rather by direct invocation of the method (as specified by the collaboration). If the first message of the slave object is again a synchronization message, we obtain a chain of synchronization requests which results in the parallel activity of all objects. This completes the semantic formalization of the synchronization construct of OMMMA-L sequence diagrams. With respect to our aim of an incremental specification preserving the semantics of the base language, note that none of the original semantic rules introduced in Sect. 3 has been replaced or modified. Moreover, the new rules specify only the behavior of new meta classes, i.e., they do not extend the meta operations of the base language.
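The master-slave chain can be pictured as follows. The sketch is our own paraphrase (the mapping first_sync_slave and the object names are invented for illustration): a ParallelCallAction directly triggers the slave's collaboration rather than creating a Stimulus, and if the slave's first message is again a synchronization, the chain continues, so all synchronized objects start together.

```python
# Paraphrase of the synchronization chain (illustration only; names invented).

def synchronized_start(obj, first_sync_slave):
    """Collect all objects whose activations start at the same instant as obj.
    first_sync_slave maps an object to the slave it synchronizes with via a
    ParallelCallAction as the first message of its collaboration, or None."""
    chain = [obj]
    while first_sync_slave.get(chain[-1]) is not None:
        chain.append(first_sync_slave[chain[-1]])   # direct invocation, no Stimulus
    return chain

# e.g. the video synchronizes the sound track, which synchronizes the speech
print(synchronized_start(":Video", {":Video": ":Sound", ":Sound": ":Speech"}))
# [':Video', ':Sound', ':Speech']
```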

Figure 11. DMM rule for processing a parallel invocation

As a consequence, we may conclude that our language extension is, in fact, conservative in the sense that standard UML sequence diagrams (i.e., without multimedia features) have the same behavior under both the original and the extended semantics.

6 Conclusion

In this paper, we have outlined an approach to the incremental specification of the semantics of diagram languages based on the paradigm of dynamic meta modeling [4]. As a case study, we have shown a conservative extension of UML sequence diagrams for the modeling of multimedia presentations. Abstracting from the concrete example, the precise syntactic constraints which ensure good behavior of semantic extensions are still to be investigated. We hope to find formal support in the theory of refinement and modularity of graph transformation [11, 22, 9] and structured operational semantics [8, 15], the two roots of the GOS approach [2] which forms the formal background of our work.

References

[1] M. Andries, G. Engels, A. Habel, B. Hoffmann, H.-J. Kreowski, S. Kuske, D. Plump, A. Schürr, and G. Taentzer. Graph transformation for specification and programming. Science of Computer Programming, 34:1–54, 1999.

[2] A. Corradini, R. Heckel, and U. Montanari. Graphical operational semantics. In Rolim et al. [20], pages 411–418.

[3] H. Ehrig, G. Engels, H.-J. Kreowski, and G. Rozenberg, editors. Proc. 6th Intl. Workshop on Theory and Application of Graph Transformation (TAGT'98), volume 1764 of LNCS. Springer-Verlag, 2000.

[4] G. Engels, J.H. Hausmann, R. Heckel, and St. Sauer. Dynamic meta modeling: A graphical approach to the operational semantics of behavioral diagrams in UML. In A. Evans, S. Kent, and B. Selic, editors, Proc. UML 2000 – Advancing the Standard, volume 1939 of LNCS, pages 323–337. Springer-Verlag, 2000.

[5] A. Evans and S. Kent. Core meta modelling semantics of UML: The pUML approach. In France and Rumpe [6], pages 140–155.

[6] R. France and B. Rumpe, editors. Proc. UML'99 – Beyond the Standard, volume 1723 of LNCS. Springer-Verlag, 1999.

[7] M. Gogolla. Graph transformations on the UML metamodel. In Rolim et al. [20], pages 359–371.

[8] J.F. Groote and F. Vaandrager. Structured operational semantics and bisimulation as a congruence. Information and Computation, 100:202–260, 1992.

[9] M. Große-Rhode, F. Parisi-Presicce, and M. Simeoni. Refinement of graph transformation systems via rule expressions. In Ehrig, Engels, Kreowski, and Rozenberg [3], pages 368–382.

[10] A. Habel, R. Heckel, and G. Taentzer. Graph grammars with negative application conditions. Fundamenta Informaticae, 26(3,4):287–313, 1996.

[11] R. Heckel, A. Corradini, H. Ehrig, and M. Löwe. Horizontal and vertical structuring of typed graph transformation systems. Math. Struct. in Comp. Science, 6(6):613–648, 1996.

[12] ITU-TS. Recommendation Z.120: Message Sequence Chart (MSC) – Annex B: Algebraic semantics of Message Sequence Charts. ITU-TS, Geneva, 1995.

[13] A. Knapp. A formal semantics of UML interactions. In France and Rumpe [6], pages 116–130.

[14] G.E. Krasner and S.T. Pope. A cookbook for using the model-view-controller user interface paradigm in Smalltalk-80. Journal of Object-Oriented Programming, 1(3):26–49, 1988.

[15] P.D. Mosses. Foundations of modular SOS. Technical Report RS-99-54, BRICS, Dept. of Computer Science, Univ. of Aarhus, 1999.

[16] Object Management Group. Analysis and design platform task force – white paper on the profile mechanism, April 1999. http://www.omg.org/pub/docs/ad/99-04-07.pdf.

[17] Object Management Group. UML specification version 1.3, June 1999. http://www.omg.org.

[18] G. Övergaard. A formal approach to collaborations in the Unified Modeling Language. In France and Rumpe [6], pages 99–115.

[19] G. Plotkin. A structural approach to operational semantics. Technical Report DAIMI FN-19, Aarhus University, Computer Science Department, 1981.

[20] J.D.P. Rolim et al., editors. Proc. ICALP Workshops 2000, Geneva, Switzerland. Carleton Scientific, 2000.

[21] St. Sauer and G. Engels. Extending UML for modeling of multimedia applications. In M. Hirakawa and P. Mussio, editors, Proc. IEEE Symposium on Visual Languages (VL'99), pages 80–87, September 13–16, 1999.

[22] A. Schürr and A.J. Winter. UML packages for PROgrammed Graph REwrite Systems. In Ehrig, Engels, Kreowski, and Rozenberg [3], pages 396–409.
