Focus: Model-Driven Development

Model Metamorphosis

Torben Weis, Andreas Ulbrich, and Kurt Geihs, Berlin University of Technology

Model transformations are critical to the Model Driven Architecture's success. Kafka, a rule-based visual programming language, greatly simplifies the implementation and customization of model transformations.

The Object Management Group's Model Driven Architecture (www.omg.org/mda) is a novel approach to separating business logic from the underlying platform technology. Using MDA, developers create a platform-independent model (PIM). A platform-specific model (PSM) is derived from the PIM to target a specific technology such as Corba, Enterprise JavaBeans, or .NET. Model transformation bridges the gap between the PIM and PSM. In the worst case, developers must manually perform this transformation, and they'll likely conclude that the PIM decreases their productivity. So, an automatic transformation from a PIM to a PSM is needed. To help automate this process, we created Kafka, a rule-based transformation language for visual programming, and the Kase modeling tool. With Kafka and Kase, developers don't need detailed knowledge of metamodels to construct MDA transformations.

MDA transformation requirements

MDA is potentially advantageous because it shifts complexity away from developers and into the tool chain and, hence, into the PIM-to-PSM transformation. We've gained experience with MDA in a project with three industrial partners where developers could attach quality-of-service (QoS) contracts to PIM components.1,2

The transformation had to take care of the QoS contracts and map them to appropriate realizations in the PSM. So, the knowledge of how to realize a certain QoS contract shifted mainly into the transformation. If developers performed the transformations manually, MDA would not really simplify software development; it would just arrange the work differently. In contrast, an automated transformation can simplify development because it captures common practice and know-how.

Experience with our industry partners has revealed that a generic "one serves all" transformation is infeasible. Although the three development teams used the same PIM and PSM metamodels, their requirements for the transformation were quite different. Each team applied its favorite design patterns to model and implement certain problems. Transformations must take this into account. Developers won't throw away all their common practice just because some transformation insists on different implementation strategies; they would rather discard the transformation from their tool chain. This implies that every team should be able to build a new transformation, or at least adapt an existing one, according to its specific needs.

Therefore, Kafka lets teams easily customize transformations without in-depth knowledge of the universe of technologies and standards that developers usually must master to implement model transformations. To implement a transformation the conventional way, developers must know that a model is a graph typed by a metamodel. The metamodel itself is a model—that is, a graph typed by a meta-metamodel, the MOF (Meta-Object Facility). Taking into account that we can treat a running application as an instance of a model, we find a total of four metalevels.3 (See the article "What Models Mean" in this issue for more discussion of these metalevels.) However, average UML users are usually unfamiliar with them. Users are primarily concerned with the visual notation—for example, rectangles depict classes, and lines between classes depict associations. So, with Kafka, developers can build model transformations based solely on this well-known visual notation.

Besides the problem of implementing a model transformation, integrating it into modern iterative software development processes is a challenge. Generator tools—and a model transformation is a kind of generator—are mostly one-way tools. So, either you aren't allowed to modify the generated model at all, or a second run of the generator will overwrite all previous modifications. In iterative development, however, the steps of editing the PIM, transforming it to a PSM, modifying the PSM, generating source code, inserting source code manually, testing, and debugging repeat several times. Obviously, a one-way model transformation would be of little help because it would overwrite the PSM during each iteration. Therefore, a model transformation constructed with Kafka automatically preserves changes applied to the PSM during subsequent transformations.

Tool support is essential to MDA. Developers need convenient MDA-aware modeling tools and model transformers tailored to their needs. Most modeling tools have a hard-coded metamodel—usually UML 1.x or a derivative of it. An MDA-aware modeling tool, in contrast, should be able to deal with different metamodels because the PIM and PSM are usually built on different ones. Therefore, Kase can load new metamodels and notations at runtime.
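To make the notion of a model as a graph typed by a metamodel more concrete, here is a minimal sketch in Python. The dictionary encoding and the conforms check are a toy illustration invented for this article rather than Kase's data structures or API, and the element names are borrowed from the shop example used later in the article.

```python
# Illustrative only: a toy encoding of "a model is a graph typed by a
# metamodel" (this is not Kase's internal representation or API).

# M2: a tiny metamodel listing which element kinds exist and how they
# may be connected.
METAMODEL = {
    "node_types": {"Component", "Interface"},
    "edge_types": {("Component", "provides", "Interface")},
}

# M1: a model is a graph whose nodes and edges are typed by the metamodel.
MODEL = {
    "nodes": {"Shop": "Component", "IShop": "Interface"},
    "edges": [("Shop", "provides", "IShop")],
}

def conforms(model, metamodel):
    """Check that every node and edge of the model is typed by the metamodel."""
    for kind in model["nodes"].values():
        if kind not in metamodel["node_types"]:
            return False
    for source, label, target in model["edges"]:
        typed_edge = (model["nodes"][source], label, model["nodes"][target])
        if typed_edge not in metamodel["edge_types"]:
            return False
    return True

print(conforms(MODEL, METAMODEL))  # prints: True
```

A metamodel-agnostic tool can load a new "METAMODEL" at runtime and check any model against it in exactly this fashion; the MOF plays the same typing role for the metamodel itself.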

Model transformers are often implemented as standalone tools loosely coupled with the modeling tool. Typically, the transformation input and output models are represented as XMI (XML Metadata Interchange) documents.4 Unfortunately, the current XMI standard does not contain diagramming information, although a new OMG standard will remove this shortcoming. If the modeling tool exports an XMI document for transformation and imports the transformed XMI document, all diagram information is lost. So, the modeling tool and transformation engine require a tighter coupling to preserve existing diagrams throughout model transformations. Kafka transformations are directly executed in Kase to overcome this problem.

The case for visual programming


Researchers had been investigating model transformation long before the OMG started the MDA initiative. Some looked into the refactoring of models.5,6 Others tried to transform UML models into more formal models such as Petri nets to apply verification techniques.7 Furthermore, the aspect-oriented-programming community developed ideas on how to perform aspect weaving at the modeling level.8 All these approaches require tools that transform, weave, or otherwise modify models.

These model transformation techniques fall into three groups. The first builds on traditional programming languages to implement the transformation. Some commercial and some open source UML tools provide access to their model via object-oriented programming languages—for example, Java or Python. Others use functional languages9 or Object Constraint Language derivatives.10 These approaches' advantage is that most build on general-purpose programming languages; a developer using such tools needs to understand just the model access API. However, a large semantic gap exists between UML as users know it (rectangles, lines, arrows, and so on) and the corresponding API. For example, a state machine has a top state that aggregates the states you can see in the diagram. If you're not familiar with the UML metamodel, you likely won't know about this, because the top state isn't directly visible in the diagram. Without such expert knowledge, you can't implement a model transformation with any of these textual programming languages.
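To see the gap, consider what a transformation author must write just to enumerate the states of a state machine. The API below is hypothetical and exists only for this illustration; the point is that the top state, which never appears in a diagram, must be known and traversed.

```python
# Hypothetical model API, for illustration only; real UML tool APIs differ.
class State:
    def __init__(self, name):
        self.name = name
        self.substates = []

class StateMachine:
    def __init__(self):
        # The UML metamodel mandates a top state that owns all other states,
        # even though no diagram ever shows it.
        self.top = State("<top>")

    def add_state(self, name):
        state = State(name)
        self.top.substates.append(state)
        return state

machine = StateMachine()
for name in ("query", "buy", "error"):
    machine.add_state(name)

# Enumerating "the states you can see in the diagram" requires knowing
# about the invisible top state and going through it.
print([s.name for s in machine.top.substates])  # ['query', 'buy', 'error']
```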


[Figure 1. A Kafka rule transforming a PIM (platform-independent model) component into a PSM (platform-specific model) class. The rule consists of a PSM search pattern (a Package), a PIM search pattern (a Component), and a PSM replace pattern that adds a ComponentImpl class plus an IMPL edge in the reference graph.]
The second group builds on XML. The OMG standardized the XMI model exchange format, which is an XML application. Some commercial tools support XMI import and export—although our experience has shown that standard compliance is sometimes questionable. It seems straightforward to transform a model by applying XSLT (Extensible Stylesheet Language Transformations) scripts to the exported XMI document and to re-import the transformed model.11 However, the serialized model (XMI) is conceptually even further from the user's perspective of UML, so the semantic gap is even bigger. Furthermore, XSLT is designed for transforming tree-based data structures, whereas a model is an arbitrarily shaped graph.

The third group uses visual notations to describe model transformations. Such approaches are usually rule based.7 Their advantage over XSLT is that you can depict the rules as diagrams that build on the modeling language's notation. Developers need little extra knowledge to implement and customize transformations. Thus, rule-based transformations with a visual notation can close the semantic gap between the user's perspective of the UML and the implementation of transformations. That's why we adopted such an approach in Kafka.

Rule-based transformations

Model transformation languages are either rule-based or imperative. In our experience, rule-based languages have two main advantages. First, as we just mentioned, you can express rules using a visual notation. Second, as we show later, you can obtain round-trip engineering at almost no cost. Round-trip engineering in the MDA context means that developers can switch back and forth between the PIM and PSM. In contrast, imperative approaches inherently require an imperative programming language—that is, a textual notation—and round-trip engineering requires extra work.

The challenge is deciding how to specify such rules. To get a feeling for this problem, we implemented several model transformations in Python, which serves as a general-purpose extension language for our Kase modeling tool. In practical experiments, implementing transformations this way was time consuming and error prone even though the developers knew their metamodel by heart. One major source of mistakes was the traversal of models: chances were high that either the transformation omitted some elements or an endless loop occurred. This pitfall disappears with rule-based languages because the engine applying the rules performs all the model traversals. The second source of errors was misinterpretation of the metamodel. A visual notation that hides the complex metamodel behind a more user-friendly notation solves this problem. So, a rule-based transformation language with a visual notation seemed to be the optimal fit.

A careful analysis of our Python-based transformers revealed that they performed basically four kinds of actions:

■ searching for a certain pattern in the PIM,
■ determining relevant elements of the PSM under construction—that is, a pattern search in the PSM,
■ adding, modifying, or removing PSM elements—that is, applying a certain pattern, and
■ bookkeeping of the already generated PSM elements.

We derived Kafka from these observations.
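For reference, the sketch below shows what such an imperative transformer looks like when written in Python against a toy model representation (plain tuples and sets invented for this article, not Kase's API), mapping a PIM component to an implementation class in the PSM. The numbered comments correspond to the four actions above.

```python
# A sketch (toy model representation, not Kase's API) of an imperative
# transformer performing the four kinds of actions listed above.

def transform(pim, psm, reference_graph):
    # 1. Search the PIM for a pattern: every component.
    for component in [e for e in pim if e[0] == "Component"]:
        # 4. Bookkeeping: skip components handled by an earlier run.
        if component in reference_graph:
            continue
        # 2. Search the PSM under construction for the relevant elements:
        #    here, the package that should receive the implementation class.
        package = next(e for e in psm if e[0] == "Package")
        # 3. Apply a pattern to the PSM: add the implementation class.
        impl = ("Class", component[1] + "Impl", package[1])
        psm.add(impl)
        # 4. Bookkeeping: record which PSM element realizes which PIM element.
        reference_graph[component] = impl

pim = {("Component", "Shop"), ("Component", "Client")}
psm = {("Package", "shop")}
refs = {}
transform(pim, psm, refs)
print(sorted(psm))  # the package plus the ClientImpl and ShopImpl classes
```

Every one of our hand-written transformers repeated this kind of loop, traversal, and bookkeeping by hand, which is exactly what a rule engine can take over.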

Kafka

Each Kafka rule consists of three patterns. In the example in Figure 1, the first pattern scans the PSM for an already created UML package. The second pattern searches for a component in the PIM. The third pattern determines how to modify the PSM; in this case, the rule adds a new UML class called ComponentImpl to the package and adds a new edge labeled IMPL to the reference graph. This graph connects the PIM elements with the PSM elements—here, Component and ComponentImpl. Subsequent transformation rules can query the reference graph to find correlated PIM and PSM elements.

[Figure 2. A Kafka transformation that maps PIM components and their interaction onto PSM classes and state machines. Rules shown include Component, Port, Protocol, and Interaction, connected in an activity-diagram-like control flow; dashed arrows denote reference-graph queries.]

The transformation engine applies a Kafka rule by first finding matches for the two search patterns. If both patterns match, the engine replaces the matched PSM fragment according to the PSM replace pattern. Finally, the engine builds up the reference graph to perform some bookkeeping, thereby mapping PIM elements to newly created PSM elements. This process iterates until the engine can't find any further matches for the two search patterns.

The transformation engine must also know in which order to apply the transformation rules. Here we use a concept similar to UML's activity diagrams, as Figure 2 shows: we treat rules as activities, and arrows indicate the control flow.
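The following self-contained sketch mimics this loop. The Rule class, the set-based model representation, and the way the Figure 1 rule is written down as Python callables are illustrative stand-ins chosen for this article, not Kafka's or Kase's implementation.

```python
# Toy sketch of the rule-application loop; not Kafka's implementation.

class Rule:
    def __init__(self, psm_search, pim_search, psm_replace):
        self.psm_search = psm_search      # (psm, refs) -> match or None
        self.pim_search = pim_search      # (pim, refs) -> match or None
        self.psm_replace = psm_replace    # (psm, refs, psm_match, pim_match) -> None

def apply_rule(rule, pim, psm, refs):
    """Apply one rule until one of its search patterns no longer matches."""
    while True:
        psm_match = rule.psm_search(psm, refs)
        pim_match = rule.pim_search(pim, refs)
        if psm_match is None or pim_match is None:
            return
        # Replace the matched PSM fragment and update the reference graph,
        # the bookkeeping that maps PIM elements to generated PSM elements.
        rule.psm_replace(psm, refs, psm_match, pim_match)

def run_transformation(rules, pim, psm, refs):
    # Rules run in the order given, mirroring the activity-diagram-like
    # control flow of a Kafka transformation.
    for rule in rules:
        apply_rule(rule, pim, psm, refs)

# The rule of Figure 1, written with this toy representation.
component_rule = Rule(
    psm_search=lambda psm, refs: next((e for e in psm if e[0] == "Package"), None),
    pim_search=lambda pim, refs: next(
        (e for e in pim if e[0] == "Component" and e not in refs), None),
    psm_replace=lambda psm, refs, pkg, comp: (
        psm.add(("Class", comp[1] + "Impl")),                # add ComponentImpl ...
        refs.update({comp: ("Class", comp[1] + "Impl")}),    # ... and the IMPL edge
    ),
)

pim = {("Component", "Shop")}          # the PIM contains one component
psm = {("Package", "shop")}            # the PSM already contains a package
refs = {}                              # the reference graph, PIM -> PSM

run_transformation([component_rule], pim, psm, refs)
print(psm)   # now also contains ('Class', 'ShopImpl')
print(refs)  # {('Component', 'Shop'): ('Class', 'ShopImpl')}
```

The termination condition is the same as in Kafka: once the reference graph records a component, the PIM search pattern no longer yields a fresh match, so the rule stops applying.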


The transformation in Figure 2 consists of a set of rules. The first rule (named Component) transforms PIM components into PSM classes. The Port rule transforms the PIM port concept into interface inheritance in the PSM. Furthermore, this rule uses the reference graph (via the dashed arrows) to find the PSM classes that Component previously created. The next three rules transform the PIM's sequence diagram into a state machine. Figure 3 shows an example of input and output for this transformation. The steps executed during a Kafka transformation thus correspond to the four kinds of actions we identified in our Python-based transformers.


[Figure 3. An example (a) input (a PIM) and (b) output (a PSM) for the transformation in Figure 2. The PIM contains Client and Shop components connected through an IShop interface (with query() and buy() operations) plus a sequence diagram of their interaction; the generated PSM contains ClientImpl and ShopImpl classes and a ClientProtocol state machine.]

Thus, such rules let developers express transformations much as they did with an imperative programming language, but with the advantages of a rule-based language.

We've observed that transformations in the context of MDA differ greatly from transformations used for refactoring5,6 or aspect weaving.2 In MDA, the transformation generates a model that has no immediate structural similarities with the input model, because the two are built on different metamodels. MDA transformations generate a new model, whereas refactoring or aspect weaving transforms the input model by applying a series of modifications. A PIM cannot be translated step by step into a PSM because the different metamodels forbid mixing PIM and PSM concepts in one model. So, the Kafka transformation rules leave the PIM unchanged and construct the PSM from scratch.

Round-trip engineering

Ideally, the PSM that a model transformation creates contains enough information to build an executable application. In practice, the generated PSM is usually a skeleton.


Developers must manually modify and extend the generated PSM and add source code. If a developer wants to switch back to the PIM after these changes to the PSM, some form of round-trip engineering is required.

UML tools often use reverse engineering to implement round-trip engineering: they translate source code back into a model. Applied to MDA, this would require a reverse transformation that maps PSM concepts to PIM concepts. Unfortunately, this is wishful thinking in the general case. PIMs are built at a higher level of abstraction than PSMs. Creating a PIM from a PSM would be machine-based abstraction—a task for artificial intelligence. Although this is an interesting research topic, technology in this field is not mature enough to provide industrial-strength reverse engineering. Even if such a reverse transformation could be implemented, it would be too difficult for the average software engineer to master.

If the PIM is modified and reverse engineering is infeasible, the next model transformation might overwrite changes made to the PSM. Clearly, the earlier changes should be automatically reapplied to the new PSM. The resulting model would then reflect both the changes to the PIM and those made to the previous PSM. This procedure poses two challenges. First, you must get hold of the changes applied to the PSM. You can do this by tracking the changes in the modeling tool or by comparing the previous and the new PSM. Second, you must reapply the changes to the model. You can view these changes as a model fragment that must be attached to and connected with the new PSM.

For example, assume that a developer modifies the PSM in Figure 3b by adding a new attribute to the ClientImpl class. When the model transformation repeats, an algorithm in Kase must find the new incarnation of ClientImpl in the new PSM to re-add the attribute. Searching for a class named ClientImpl isn't a good idea because the name in the PIM might have changed. Instead, the algorithm uses the reference graph to determine the new incarnation of ClientImpl. If the new PSM no longer contains an incarnation of ClientImpl, Kase cannot re-add the attribute to the new PSM, and the tool issues a warning. With Kafka, a transformation's developers don't have to care much about this; they just have to make sure that the transformation builds a reference graph that connects every PSM element with its PIM counterparts. The tools hide the remaining work.

Round-trip engineering involves not only the model but also diagrams. Models without nice diagrams are like obfuscated source code: they contain all the relevant information, but they just don't fit into a human's brain. Losing all the diagrams during every development iteration is unacceptable because creating good diagrams takes considerable time. So, diagrams must be preserved throughout model transformations. Alternatively, a modeling tool could automatically "re-layout" the diagrams after every transformation, but this isn't the answer. First, computing optimal layouts is NP-complete; finding good heuristics for simple graphs is difficult enough, and computing well-laid-out UML diagrams is even more difficult. Second, developers usually split large models into several diagrams, each showing semantically related elements, and detecting such semantic relationships is difficult for an algorithm. Instead, Kase tries to adapt existing diagrams to the new PSM after each model transformation. Every visual item in a diagram can check by itself whether it still has a representation in the new PSM. If not, Kase removes the item from the diagram. Then Kase scans all new model elements to determine whether the diagram requires new visual items. Thus, Kase automatically adapts diagrams instead of discarding them.
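Returning to the ClientImpl example, here is a minimal sketch of the reapply step in the same toy Python style used earlier. The change-record format, the attribute name timeout, and the function itself are assumptions made for illustration, not Kase's actual mechanism.

```python
# Illustrative only: replaying changes recorded against an old PSM onto a
# freshly generated PSM via the reference graph (not Kase's implementation).

def reapply_changes(changes, refs, new_psm):
    """Replay recorded PSM changes onto the newly generated PSM.

    changes: list of (pim_element, attribute) pairs recorded when the
             developer edited the previous PSM.
    refs:    reference graph of the new transformation run, mapping each
             PIM element to its newly generated PSM class.
    new_psm: dict mapping PSM class names to their attribute lists.
    """
    warnings = []
    for pim_element, attribute in changes:
        target = refs.get(pim_element)
        if target is None:
            # The new PSM contains no incarnation of this element any more;
            # the change cannot be reapplied, so the tool must warn.
            warnings.append(f"dropped attribute {attribute!r} of {pim_element}")
            continue
        new_psm[target].append(attribute)
    return warnings

# Figure 3 scenario: the developer added an attribute to ClientImpl.
changes = [(("Component", "Client"), "timeout: int")]

# Reference graph produced by the next transformation run. The lookup works
# even if the generated class were renamed in the meantime, because it goes
# through the PIM element rather than the class name.
refs = {("Component", "Client"): "ClientImpl", ("Component", "Shop"): "ShopImpl"}
new_psm = {"ClientImpl": ["shop: IShop"], "ShopImpl": []}

print(reapply_changes(changes, refs, new_psm))  # []  -> no warnings
print(new_psm["ClientImpl"])                    # ['shop: IShop', 'timeout: int']
```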

MDA's success will depend heavily on high-quality tool support. Model transformation is a crucial step that software engineers must master because, as we said before, a generic one-serves-all transformation doesn't and won't exist. As we've shown, Kafka, whose visual notation is close to that of the PIM and PSM, greatly simplifies the implementation and customization of transformations. Furthermore, its support for round-trip engineering will greatly aid the deployment of MDA-based approaches in practice.

To further evaluate and improve our tools, we plan to apply Kase and Kafka to robotics. Developers will be able to model a robot's control system in a PIM that will be mapped to a Windows CE- or Lego Mindstorms-specific PSM. Furthermore, because current testing and debugging techniques are limited to the PSM level, we'll investigate how the results of testing and debugging can be back-annotated to the PIM.

About the Authors

Torben Weis is a PhD student at the Berlin University of Technology. His research interests are in CASE tools, model transformation, and QoS-aware component systems. He holds a master's degree (Diplom-Informatiker) in computer science from the Goethe University of Frankfurt. Contact him at Sekretariat EN6, Einsteinufer 17, 10587 Berlin, Germany; [email protected].

Andreas Ulbrich is a PhD student at the Berlin University of Technology. His research interests are middleware, QoS management, and adaptive systems. He holds a master's degree (Diplom-Informatiker) in computer science from the Chemnitz University of Technology. Contact him at Sekretariat EN6, Einsteinufer 17, 10587 Berlin, Germany; [email protected].

Kurt Geihs is a professor of distributed systems at the Berlin University of Technology. His research interests include distributed systems, operating systems, networks, and software technology. His current projects focus on QoS management, component-based software, and middleware for mobile and ad hoc networking. He received his PhD in computer science from the Aachen University of Technology. Contact him at Sekretariat EN6, Einsteinufer 17, 10587 Berlin, Germany; [email protected].

References

1. T. Weis et al., "A UML Meta-model for Contract Aware Components," Proc. 4th Int'l Conf. Unified Modeling Language (UML 2000), LNCS 2185, Springer-Verlag, 2001, pp. 442–456.
2. T. Weis et al., Business Component-Based Software Engineering, Kluwer Academic, 2002, pp. 135, 150.
3. Meta-Object Facility Specification, ver. 1.4, Object Management Group, 2002; http://doc.omg.org/formal/02-04-03.
4. XML Metadata Interchange Specification, ver. 1.1, Object Management Group, 2002; http://doc.omg.org/formal/00-11-02.
5. M. Fowler, Refactoring, Addison-Wesley, 1999.
6. G. Sunyé et al., "Refactoring UML Models," Proc. 4th Int'l Conf. Unified Modeling Language (UML 2000), LNCS 2185, Springer-Verlag, 2001, pp. 134–148.
7. D. Varró, G. Varró, and A. Pataricza, "Designing the Automatic Transformation of Visual Languages," Science of Computer Programming, vol. 44, no. 2, Aug. 2002, pp. 205–227.
8. S. Clarke and R.J. Walker, "Composition Patterns: An Approach to Designing Reusable Aspects," Proc. 23rd Int'l Conf. Software Eng. (ICSE 2001), IEEE CS Press, 2001, pp. 5–14.
9. W. Ho, F. Pennaneac'h, and N. Plouzeau, "Umlaut: A Framework for Weaving UML-Based Aspect-Oriented Designs," Proc. 33rd Int'l Conf. Technology of Object-Oriented Languages and Systems (TOOLS 33), IEEE CS Press, 2000, pp. 324–334.
10. J. Araújo et al., "Integration and Transformation of UML Models," Object-Oriented Technology, ECOOP 2002 Workshop Reader, LNCS 2548, Springer-Verlag, 2002, pp. 184–191.
11. J. Kovse and T. Haerder, "Generic XMI-Based UML Model Transformations," Proc. 8th Int'l Conf. Object-Oriented Information Systems (OOIS 2002), LNCS 2425, Springer-Verlag, 2002, pp. 192–198.

