Integrating Operational Specification and Performance Modeling for Digital-System Design

A Dissertation Presented to the Faculty of the School of Engineering and Applied Science at the

University of Virginia

In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy (Computer Science) by Ambar Sarkar May 1995

© Copyright by Ambar Sarkar All Rights Reserved May 1995

APPROVAL SHEET

This dissertation is submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Science)

Ambar Sarkar

This dissertation has been read and approved by the Examining Committee:

James P. Cohoon (Dissertation Co-Advisor)

Ronald Waxman (Dissertation Co-Advisor)

John C. Knight (Committee Chairman)

James C. French

James H. Aylor

Accepted for the School of Engineering and Applied Science:

Dean Richard W. Miksad School of Engineering and Applied Science May 1995

Acknowledgments

Words do not suffice to express my gratitude towards my advisors: Jim Cohoon and Ron Waxman. Thanks for your support, especially when it counted the most. Thanks for your patience, understanding and constructive criticisms. Thanks to my committee members: John Knight, Jim French, and Jim Aylor, for their insights into the subject matter and their useful suggestions. Thanks to Professor Anita Jones for temporarily serving on my committee and for her encouragement. Thanks to Samuel Sortais and Sylvain Revel, from IRESTE, France, who were such a delight to work with and who implemented several of my key ideas. Thanks are also due to Sanjay Srinivasan, who contributed initially towards the implementation of the link between Statechart and ADEPT. Thanks to the CACTUS group and the Department of Sociology for supporting me and providing me with interesting perspectives on many broad issues of Computer Science. Thanks to iLogix for providing the ExpressVHDL tool and their excellent product support. Thanks are also due to the Uninterpreted-Modeling group at the Center for Semicustom Integrated Systems at UVa for maintaining the ADEPT tool. Thanks are due to the Department of Computer Science for its constant support, encouragement, and interest in my progress. I especially wish to thank all my friends, near and far, who stood by me through both good and bad times, and were a constant source of encouragement, inspiration, and empathy. And finally, I thank my wonderful parents and my dear sister for their unconditional love, sacrifice, and understanding.

This Dissertation is Dedicated To My Parents, My Sister, and My Friends

Abstract

While evolving from an abstract concept into a detailed implementation, the design of a complex digital system proceeds through different design stages. Due to lack of effective communication of design intent among these stages, errors are introduced in the product. Early detection of such errors is crucial for increasing robustness and reducing design costs of the final product. To facilitate early detection, a design methodology must support model continuity. Model continuity comprises three subproblems:

• Complementary modeling: modeling different aspects of the system under design in different modeling domains concurrently,

• Back annotation: incorporating design details obtained during later stages back into the models developed during earlier stages, and

• Conformance checking: ensuring conformance of models across various design stages.

We address the problem of model continuity in the context of reactive systems through the integration of operational specification and performance models. Complementary modeling is supported through integrated simulation of the two models. Both models execute concurrently, exchanging data and simulation stimuli with each other. Back annotation is supported through a novel technique, called performance annotation. This technique allows the dynamic incorporation of delay-related information in an implementation-independent manner from a concurrently executing performance model. Finally, conformance checking is performed by a simulation-based algorithm. Similar to the comparison-checking technique found in the context of software design diversity, this algorithm checks the operational-specification and performance models by comparing their output sequences against each other. However, we also address situations when the output sequences can be quite different even if the models conform, and prove that the algorithm correctly determines all conformance violations, under certain design assumptions, during a simulation session. Integration of operational specification and performance models gives rise to a novel

design methodology. Starting from an operational specification, the designer proposes an implementation in an incremental and iteratively-refined manner. Using this methodology, we demonstrate how one can validate an implementation against its specification, remove ambiguities in the original specification, and obtain very early performance estimates for a system under design.

Table of Contents

Chapter 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.1 Design stages of complex digital systems .......................................................1 1.1.1 Reactive systems................................................................................................................. 4 1.1.2 Operational specification and performance modeling........................................................ 4

1.2 Problem Definition ...........................................................................................5 1.2.1 Model continuity................................................................................................................. 5 1.2.2 Characteristics of a design methodology for reactive systems........................................... 6

1.3 Proposed Solution ............................................................................................8 1.4 Proposed Methodology ....................................................................................9 1.5 Benefits of the proposed methodology ............................................................10 1.6 Contributions ...................................................................................................11 1.7 Organization .....................................................................................................12 1.8 Summary ..........................................................................................................13

Chapter 2 Related Work. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.1 Introduction ......................................................................................................14 2.1.1 Conformance attribute ........................................................................................................ 15 2.1.2 Interaction attribute............................................................................................................. 15 2.1.3 Complexity attribute ........................................................................................................... 16

2.2 Design methodologies ......................................................................................17 2.2.1 MCSE ................................................................................................................................. 17 2.2.2 SpecCharts .......................................................................................................................... 18 2.2.3 MIDAS ............................................................................................................................... 19 2.2.4 CAD Frameworks............................................................................................................... 19 2.2.5 Performance specifications................................................................................................. 20 2.2.6 SIERA................................................................................................................................. 20 2.2.7 SARA.................................................................................................................................. 21 2.2.8 NAW................................................................................................................................... 21 2.2.9 CMU-DA ............................................................................................................................ 22 2.2.10 Ptolemy ............................................................................................................................. 22 2.2.11 Hybrid Modeling .............................................................................................................. 23

2.3 Conclusions ......................................................................................................24

2.3.1 Support for conformance attribute...................................................................................... 24 2.3.2 Support for interaction attribute.......................................................................................... 25 2.3.3 Support for complexity attribute......................................................................................... 25 2.3.4 Recommendations............................................................................................................... 27

2.4 Summary ..........................................................................................................27

Chapter 3 Research Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28 3.1 Introduction ......................................................................................................28 3.2 Formal definition and detection of conformance .............................................29 3.3 Exchange of information during simulation ....................................................30 3.4 Detection of errors ...........................................................................................31 3.5 Implementing Integrated Simulation ...............................................................34 3.5.1 Choice of a common simulation environment.................................................................... 34 3.5.2 Exchange of information between Statecharts and ADEPT models .................................. 36 3.5.3 Linking Statecharts and ADEPT models through the VHDL ............................................ 37 3.5.4 Developing test-bench ........................................................................................................ 37 3.5.5 Identification of Statecharts partitions................................................................................ 38 3.5.6 Degree of automation ......................................................................................................... 38

3.6 Summary ..........................................................................................................39

Chapter 4 Functional Timing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 4.1 Introduction ......................................................................................................40 4.2 Specification of functional timings in Statecharts ...........................................42 4.3 Effects of ignoring functional timing ...............................................................44 4.4 Incorporating functional timings into specification .........................................48 4.5 Performance Annotation ..................................................................................50 4.6 Example of performance annotation ................................................................52 4.7 Rules for performance annotation ....................................................................55 4.8 Summary ..........................................................................................................57

Chapter 5 Conformance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 5.1 Introduction ......................................................................................................59 5.2 Definitions ........................................................................................................65 5.3 Design assumptions .........................................................................................68 5.4 Properties of rfa and ack sequences .................................................................70

5.5 Algorithm .........................................................................................................71 5.6 Proof of algorithm DetectConformance ...........................................................73 5.7 Orthogonal Sources ..........................................................................................76 5.8 Algorithm EliminateOrthogonalSources .........................................................76 5.9 Summary ..........................................................................................................81

Chapter 6 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82 6.1 Introduction ......................................................................................................82 6.2 A brief overview of Token-ring .......................................................................84 6.2.1 Statecharts description of the token-ring ............................................................................ 86 6.2.2 ADEPT model of the token-ring......................................................................................... 87

6.3 Examples ..........................................................................................................88 6.3.1 Test-bench: Performance estimates from Statecharts......................................................... 90 6.3.2 Watchdog timer: Counterintuitive semantics of Statecharts .............................................. 90 6.3.3 Monitor: Incorrect component instantiation in ADEPT ..................................................... 94 6.3.4 Node protocol: Unanticipated scenario encountered.......................................................... 97 6.3.5 Protocol specification: Deviation from Statecharts semantics ........................................... 101 6.3.6 Node_protocol: Estimating queue size ............................................................................... 101

6.4 Conclusions ......................................................................................................103 6.5 Summary ..........................................................................................................103

Chapter 7 Summary, Conclusions and Future Work . . . . . . . . . . . . . . . 105 7.1 Introduction ......................................................................................................105 7.2 Research results ...............................................................................................106 7.2.1 A mechanism to incorporate functional timing into Statecharts ........................................ 108 7.2.2 A precise definition of conformance .................................................................................. 108 7.2.3 A mechanism to check for conformance ............................................................................ 108

7.3 Developed methodology ..................................................................................109 7.4 Experimental results .........................................................................................110 7.4.1 Detection of errors .............................................................................................................. 110 7.4.2 Performance estimates ........................................................................................................ 110

7.5 Future work ......................................................................................................111 7.6 Summary ..........................................................................................................112

Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Appendix A Implementation of Methodology. . . . . . . . . . . . . . . . . . . . 120 A.1 Introduction .....................................................................................................120 A.2 Performance annotation ..................................................................................122 A.3 Identifying model interfaces using black-box descriptions ............................122 A.4 Identifying model correlation ..........................................................................124 A.5 Generation of VHDL ......................................................................................124 A.5.1 Statecharts model............................................................................................................... 124 A.5.2 ADEPT model.................................................................................................................... 125 A.5.3 Link code ........................................................................................................................... 125

A.6 Simulation in Vantage .....................................................................................128

Appendix B Statecharts, ADEPT, VHDL . . . . . . . . . . . . . . . . . . . . . . . 129 B.1 Statecharts .......................................................................................................129 B.1.1 States .................................................................................................................................. 129 B.2.1 Transitions.......................................................................................................................... 131

B.3 ADEPT ............................................................................................................132 B.4 VHDL ..............................................................................................................133

Appendix C Completed Statecharts Specifications . . . . . . . . . . . . . . . . 134 Appendix D Translating ADEPT models into Statecharts. . . . . . . . . . . 162 D.1 Statecharts for Uninterpreted primitive blocks. ..............................................162

List of Figures Figure 1.1 Steps in proposed design methodology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Figure 4.1 A Statecharts without functional timings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 Figure 4.2 Ambiguity in specification in the absence of functional timings . . . . . . . . . . . . . . . . . . . . . . . . 45 Figure 4.3 Unexplored simulation scenario in the absence of functional timings . . . . . . . . . . . . . . . . . . . . 47 Figure 4.4 Statecharts for a patient monitoring system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 Figure 4.5 Statecharts for monitor before annotation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 Figure 4.6 Statecharts for monitor after performance annotation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 Figure 4.7 Transformations for performance annotation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 Figure 5.1 Statecharts for monitor after performance annotation and its black-box representation . . . . . . 63 Figure 5.2 Implementation of delayedRfa(r,t) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69 Figure 5.3 Two stage model of VHDL process execution semantics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 Figure 6.1 Token-ring configuration with five stations.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84 Figure 6.2 A top-level Statecharts description of the token-ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86 Figure 6.3 A top-level ADEPT description of a token-ring station . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87 Figure 6.4 Watchdog timer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89 Figure 6.5 Components of the test-bench developed in ADEPT.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 Figure 6.6 Watchdog timer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92 Figure 6.7 A top-level ADEPT model of the Watchdog timer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93 Figure 6.8 Watchdog timer: Corrected version of the Statecharts model. . . . . . . . . . . . . . . . . . . . . . . . . . . 94 Figure 6.9 ADEPT model of monitor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95 Figure 6.10 Statecharts model of monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96 Figure 6.11 Unanticipated scenario in Statecharts specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98 Figure 6.12 Ambiguity removed from Node_protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100 Figure 6.13 Statecharts model of protocol specification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 Figure 6.14 ADEPT model of the correctly implemented node protocol. . . . . . . . . . . . . . . . . . . . . . . . . . 102 Figure A.1 Statecharts model for watchdog timer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 Figure A.2 ADEPT model for watchdog timer . . . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . . . . . . . . . . . . . . . 121 Figure A.3 Performance annotated Statecharts of watchdog timer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122 Figure A.4 ADEPT model for watchdog timer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 Figure A.5 Identifying the correlations between the models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124 Figure B.2 Statecharts representation of a simple monitor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130 Figure D.1 Statecharts for the protocol between a dependent output and a data input. . . . . . . . . . . . . . . . 162 Figure D.2 Statecharts for the protocol between an independent output and a control input. . . . . . . . . . . 163 Figure D.3 Generic Statecharts for the Uninterpreted Model Primitive Block . . . . . . . . . . . . . . . . . . . . . 165 Figure D.4 Statecharts for the SWITCH ADEPT model primitive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166 Figure D.5 Statecharts for the WYE primitive block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167 Figure D.6 Statecharts for the AND primitive block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

Chapter 1 Introduction

Abstract

A number of stages are involved in the design of complex digital systems. Divergence among these stages is a source of error and miscommunicated design intent. In order to facilitate early detection of such errors, the design methodology must provide support for maintaining model continuity. The problem of maintaining model continuity can be decomposed into three subproblems: checking model conformance, making specification visible at all design stages, and providing back annotation of design details. We present a design methodology that supports model continuity between two important early stages in the design of reactive systems. The two stages are operational specification modeling and performance modeling. We choose Statecharts and ADEPT as the modeling environments used to develop the operational specification and the performance models respectively. The design methodology is based on the integrated simulation of a Statecharts model and an ADEPT model of any given system. The methodology supports development of the complete ADEPT model in an incremental, iterative manner. Each increment of the ADEPT model is validated against its Statecharts counterpart, and is examined in the context of the entire system under design. Finally, we identify the major contributions made in this dissertation.

1.1 Design stages of complex digital systems

As the design of a complex system proceeds through the refinement of an abstract concept into a detailed design, much time and effort is spent in analyzing the evolving stages. A significant amount of such effort is invested in the form of simulation-based analyses, which are applied to simulation models developed during several of these stages. Different modeling languages and environments exist for such simulation-based analyses. Since the necessary expertise changes as the design unfolds with each stage, different groups of designers become involved. Good communication among these different modeling environments and groups is essential to ensure that the system requirements are kept realistic and are met by the implementations. However, most design groups and modeling environments often work only at their own levels. Communication between these levels is awkward, due to the divergence among the various simulation environments. Because of this divergence, it is extremely difficult to simulate the system incrementally as the design progresses across various stages. There is much room for introduction of error and miscommunication of design intent across the various design stages.

Introduction of errors into the design cycle and miscommunication of design intent can be ascribed to three major sources. One major source is the lack of proper support for automation during early stages of design. Human intervention is often required during such stages. Such intervention becomes a potential source for many errors. The design environment should provide support for detecting and eliminating such errors. The second major source of errors and miscommunication of design intent is the fact that specifications of the product may change over time. Certain requirements of the product may not be comprehended until the product design process has been through a few stages, possibly resulting in modification of the original specification. Management of such changes across design stages should be supported by the design environments. Finally, the third major source of potential errors and design-intent miscommunication is due to technological advances that often make existing implementation technologies obsolete in a short time. In order to develop implementations that incorporate the latest technological advances, the specification should be as implementation independent as possible. Implementation independence, however, increases the number of implementation choices available, thereby increasing the design-space complexity. Unless steps are taken to handle such complexity, suboptimal designs will be the norm.

If the errors introduced during various stages of design are not detected and subsequently removed, they typically manifest themselves during later stages, when it is significantly costlier to fix them. While the investment of designer effort in detecting these errors earlier might be higher, in the long run the development of the product will be more cost effective, as there will be fewer errors to fix later, and the product will have a better chance of being designed correctly sooner.

In addition to the detection and subsequent elimination of errors, the design environment must also support the difficult task of considering all the design alternatives systematically. Lack of such support may result in several desirable alternatives being ignored. A methodical approach for handling the design-space complexity problem will help in identifying the better alternatives sooner, thus aiding in decreasing the overall development time of a robust product.

In order to reduce errors and miscommunications of design intent and to provide better consideration of design alternatives, the design methodology must support increased cooperation and communication among these stages throughout the entire design cycle of a product. A review [SAR94b] of the state-of-the-art research in the field of digital-system design, however, reveals that most efforts that support such increased interaction between different design stages have not been extended to early design stages. Support for increased communication between levels of abstraction higher than the algorithmic level is rare. The lack of communication between the earlier and the later stages of design is due to the current immaturity of the field of digital-system design. The state of the art in digital-system design, however, is improving rapidly to the point where approaches such as automated synthesis from relatively high-level (compared to logic-level) behavioral specifications to silicon implementations are becoming increasingly feasible. Simulatable specification languages [DAV88, DEM79, HAR92, NAR91, ZAV86] have evolved and are routinely used in describing real-life applications at the end-user level. Languages such as VHDL, capable of modeling an electronic system at many levels of abstraction, have developed into industrial and academic standards for describing system design. Such standards enable a common simulation environment for various tools.

Given the advances described above, it is time to advance the state of the art in digital-system design by bringing high-level specification development closer to the overall design process and by increasing its interaction with later design stages. The goal of this dissertation is to advance the state of the art in the early stages of digital-system design. We focus specifically on the domain of reactive systems, which is described in the next section. Our research involves early stages, namely, product specification from an end-user point of view and early performance predictions for a proposed implementation of the product. We achieve our goal by developing a design methodology that supports discovery and elimination of errors very early in the design cycle of a reactive system. The main hypothesis of this dissertation is that verifying that the system being designed satisfies its specification can be better accomplished if the models developed during specification and later design stages can be simulated together in the same environment.

Sections 1.1.1 and 1.1.2 respectively describe reactive systems and two early stages in the design of reactive systems. In Section 1.2 we define the problem being addressed in this dissertation. We discuss the proposed solution in Section 1.3. In Section 1.6 we enumerate the contributions made in this dissertation. We conclude this chapter with the organization of the rest of the dissertation.

1.1.1 Reactive systems

A reactive system is one that is in continual interaction with its environment and executes at a pace determined by that environment. Reactive systems are best described by models based on a stimulus-response paradigm. On the occurrence of a stimulus from the environment, the model responds and then waits for further stimuli. The response occurs as a combination of a change in the system state and generation of further stimuli. The class of reactive systems includes many kinds of embedded, concurrent, and real-time systems. Examples of reactive systems include telephones, avionics systems, communication networks, and human-machine interfaces. One important characteristic of reactive systems is that, unlike transformational systems, they cannot be specified on the basis of simple input-output relationships. In addition to the inputs, the output of the system depends on the current state of the system, which makes it relatively difficult to describe the behavior of a reactive system, as opposed to a transformational system. For a survey of several specification methodologies that exist today for reactive systems, see [DAV88, SAR94b].
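
To make the stimulus-response view concrete, the following sketch shows a toy reactive component in executable pseudocode (Python). The state and event names are invented for illustration only and are not part of the Statecharts or ADEPT notations used in this work; the point is simply that the same stimulus can draw different responses depending on the current state.

    # Minimal sketch of a reactive component: the response depends on the
    # current state as well as on the stimulus itself (names are invented).
    class SimpleMonitor:
        def __init__(self):
            self.state = "idle"

        def react(self, stimulus):
            """Consume one stimulus from the environment; return the responses."""
            if self.state == "idle" and stimulus == "start":
                self.state = "watching"
                return ["ack"]
            if self.state == "watching" and stimulus == "alarm":
                self.state = "idle"
                return ["notify_nurse"]
            return []  # no response; state unchanged

    monitor = SimpleMonitor()
    for s in ["alarm", "start", "alarm"]:
        # the same stimulus ("alarm") draws different responses in different states
        print(s, "->", monitor.react(s))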

1.1.2 Operational specification and performance modeling

During the early stages of reactive system design, two modeling domains, namely, operational specification modeling and performance modeling, play significant roles. Operational specification modeling documents the external behavior of the system under design. The operational specification model is an implementation-independent specification of the behavior, and is simulatable [SAR94b, ZAV84]. Performance modeling is used to predict the performance of a proposed implementation. A performance model abstracts the performance-related features of an implementation. Qualitative and quantitative estimates of performance-related characteristics are then obtained by a combination of analysis and simulation-based techniques.

We propose a methodology that is based on integrated simulation of an operational specification and a performance model for a given system under design. Given an operational specification of the system, the methodology supports the development of a proposed implementation in an incremental, stepwise-refined manner. One advantage of our methodology is that each increment of the implementation is validated by checking if the performance model of the proposed implementation conforms with its operational specification counterpart. The examination of the proposed implementation is done in the context of the overall system, as opposed to being validated in isolation. Another advantage of the methodology is that implementation-dependent information, which is available during performance modeling, can now be introduced during the execution of the operational specification. Inclusion of such information may lead to the discovery of anomalies in the specification which may not be otherwise apparent. Discovery of such anomalies may result in modifications to the specification, resulting in early detection and elimination of errors in the overall design cycle.

1.2 Problem Definition

During the early stages of design of any digital system of reasonable complexity, there are three major challenges that need to be addressed. The first challenge is to understand to what extent the models developed during various design stages should interact. The second challenge is to provide support for the required interaction between the models. The third challenge is to support incremental development of the system across various stages, such that even partially developed models from different design stages can interact. These challenges can be addressed by supporting model continuity throughout the design process. The remainder of this section is organized as follows. In Section 1.2.1, we define the problem of supporting model continuity. In Section 1.2.2, we discuss and finally enumerate the desired characteristics of a design methodology for reactive systems.

1.2.1 Model continuity

Model continuity can be defined as the maintenance of relationships between models created in different model spaces such that the models can interact in a controlled manner and may be utilized concurrently throughout the design process. Dealing with the complexity of specification and performance models requires techniques that guarantee model continuity.

Significant effort is involved in the development and debugging of a model of the system under design. Once the model has been developed and analyzed, however, it is often discarded and is not revisited in the remainder of the design process. Such a limited useful life of a model tends to make the corresponding modeling methodology unpopular with designers. The limited usefulness of a model is due to the difficulty of maintaining model continuity. The problem of maintaining model continuity can be divided into the following three subproblems:

• Checking model conformance: This subproblem involves the maintenance of conformance between models throughout the design process. This assures that the models developed during various stages of detail of system design describe the same product, i.e., conform to the end-user specification and do not contradict each other.

• Making specification visible at all stages of design: The projection of high-level rationale into detailed design decisions is an important concern. Decisions made at lower stages, while apparently suitable at that level, may violate the goals at a higher, more abstract level. The designer at a lower level must conform to the higher-level constraints.

• Providing back annotation of design details: Reflection of lower-level details back into higher-level consideration is important. Decisions made at lower stages may affect decisions at higher stages of design, possibly requiring changes to the original specifications in the limit and affecting other components of the system being developed.

Support for model continuity is the responsibility of the design methodology. In the next section, we identify the specific characteristics of a design methodology that provides model continuity across the design stages of operational specification and performance modeling.

1.2.2 Characteristics of a design methodology for reactive systems

In this section, we discuss the challenges of supporting model continuity in the context of a design methodology that encompasses operational specification modeling and performance modeling, both being early stages of reactive system design. In order to identify desirable characteristics of the methodology, we look at a typical design scenario. The operational specification is first developed. Once it has been developed and analyzed, an implementation of the specification is proposed. A performance model is next developed to predict the performance of the implementation. There is usually no support for checking the conformance between the operational specification and the performance model. In order to control complexity, the performance model may be built in an incremental manner. However, in existing design methodologies, each new increment is developed and analyzed in isolation, rather than in the context of the rest of the system under design, which may result in certain system-oriented design errors being left undetected until later stages. Thus, the methodology must provide support for checking conformance between the operational specification and the performance model developed. In addition, it should support incremental development of the performance model in conjunction with the operational specification.

The problem of checking conformance between the operational specification and the performance model is difficult due to the nature of reactive systems and the need for maintaining implementation independence of the specification. Reactive systems are typically characterized by a high degree of concurrency, complex event- and time-driven behavior, and asynchronicity. These properties make the corresponding models complex to develop and analyze. Since the operational specifications are developed with the end user in mind, they are kept implementation independent as far as possible. On the other hand, the performance models reflect the implementation to some extent, and hence are implementation dependent. As a result, maintaining conformance between an operational specification model and a performance model is a difficult challenge, since the two model types describe different aspects of the system under design.

Specification modeling is typically performed in isolation from later design stages such as performance modeling. However, many specification errors, ambiguities, or inconsistencies are discovered during the later stages. Many of these errors are due to miscommunicated design intent and lack of anticipation of certain design scenarios during the specification stage. The challenge lies in creating a design methodology that provides support for detection of such situations and allows incorporation of the needed corrections into the specification in a seamless manner.

We believe the challenge of model continuity during the early stages of reactive system design can only be handled by a methodology that possesses the following features:

• Supports both operational specification and performance modeling in an integrated fashion. By integration, we mean the ability to simulate and analyze simultaneously across different model spaces.

• Supports the detection of nonconformance between an operational specification and a performance model.

• Supports transfer of information, in both directions, between the operational specification and the performance model during simulation.

• Maintains implementation independence of the operational specification while incorporating implementation-dependent information into its simulation.

1.3 Proposed Solution

We propose a design methodology that addresses the problem of maintaining model continuity for the design of complex reactive systems, specifically during early design stages, namely operational specification and performance modeling. We use the modeling environment based on Statecharts [HAR88, HAR87a, HUI91, HUI88, ILO92] for developing operational specifications. The language of Statecharts is based on the finite-state-machine formalism and is suitable for describing the behavior of reactive systems. For more details on Statecharts, see Appendix B. We use the ADEPT [AYL92, RAO90, SRI90] performance-modeling environment. ADEPT is a simulation-based environment utilizing the VHDL [IEE88] language and has an underlying formalism based on Colored Petri Nets [PET81, SWA92]. It offers designers the ability to model the information flow of a proposed implementation at a high level of abstraction. For more details on ADEPT and VHDL, see Appendix B.

The proposed design methodology provides an integrated design environment where the designer starts with a Statecharts description of the system under design. Based on the Statecharts specification, a complete performance model is developed in an incremental manner. Each new increment of the performance model is validated for conformance to its Statecharts counterpart and is analyzed in the context of the rest of the system. The steps of the methodology are given in Figure 1.1.

1.4 Proposed Methodology

Create an executable specification model of the SUD (sudspec) using Statecharts.
Create a test-bench, using ADEPT, to drive the specification model and obtain performance data.
Integrate sudspec with the test-bench. The integrated model (IM) has both Statecharts and ADEPT components.
Repeat
    Select a component of the Statecharts model (spec).
    Propose an implementation for spec.
    Create a performance model (pm) in ADEPT for the proposed implementation.
    In IM, replace spec with pm.
    Repeat
        Simulate IM.
        Check whether the behavior of pm conforms with spec. If not, modify spec or pm as needed.
    Until pm conforms to spec.
    Obtain performance-related data, reflecting the incorporation of pm.
Until all Statecharts components have been selected.
Connect all conformed pm’s created so far. This connected model is a conformed performance model for the SUD.

Figure 1.1 Steps in proposed design methodology

As a first step in our methodology, an executable specification model of the system under design is created using the language of Statecharts. We refer to this Statecharts model as sudspec. Next, a test-bench is developed to emulate the environment in which the system is expected to operate. This test-bench can be modeled in either Statecharts or ADEPT. We choose to model the environment in ADEPT, since ADEPT is a performance-modeling environment and we are interested in obtaining performance predictions for the system. The next step is to integrate the test-bench with sudspec so that both models can be simulated concurrently, with outputs from the test-bench driving the sudspec and vice versa. Such a simulation session models the operational scenario where the system under design interacts with its environment. We will refer to the integrated model as IM. IM is simulated in order to obtain some preliminary performance numbers. The primary aim of this step is to make sure that the sudspec behaves consistently under operational scenarios generated by the test-bench.

Since the system under design may be quite complex, it is more reasonable to develop the entire ADEPT model in an incremental fashion. The sudspec is therefore partitioned into several components. The designer’s goal is now to replace the Statecharts components in the IM with their corresponding ADEPT components one at a time. The replacement is done in such a way that instead of the Statecharts component (spec component), it is the ADEPT model (pm component) that communicates its outputs to the rest of the world. Both the spec component and the pm component are simulated in a synchronized manner, with both models getting the same inputs. If the outputs produced by spec do not conform to the outputs produced by pm, an error is declared. The error may be due to an erroneous implementation by the designer, an ambiguous specification by the specifier, or a combination of the two. In case of an error, the pm and the spec components are modified as needed, and the debugging through synchronized simulation continues until the outputs of the two models conform.

Once all spec components of the sudspec are accounted for by their corresponding pm components, all the pm components are connected together to make a complete performance model for the sudspec. Once the design is entirely in the ADEPT environment, it may continue via ADEPT’s integrated hybrid-modeling techniques, providing model continuity from the performance level to the behavioral level of design.
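
The fragment below is only an illustration of the comparison step of this loop, assuming the toy react(stimulus) interface from the sketch in Section 1.1.1; the actual check is performed inside the integrated VHDL simulation, and its precise definition, including the cases where conforming models may legitimately produce differing output orders, is developed in Chapter 5.

    # Illustration only: drive the specification component (spec) and the proposed
    # implementation (pm) with identical stimuli and compare their output sequences.
    def check_conformance(spec, pm, stimuli):
        spec_out, pm_out = [], []
        for s in stimuli:
            spec_out.extend(spec.react(s))  # same inputs to both models
            pm_out.extend(pm.react(s))
        if spec_out != pm_out:
            return f"nonconformance: spec produced {spec_out}, pm produced {pm_out}"
        return "output sequences conform for this simulation session"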

1.5 Benefits of the proposed methodology

The methodology supports the designer in the development of a complete performance model for the system under design one piece at a time, in a stepwise-refined manner. Each performance-model component is tested first for conformance to its specification and then in the context of the rest of the system under design, instead of being tested in isolation. As a result, the complexity of the design process is kept tractable.

The Statecharts model is used to check the validity of the performance model. In addition, information from the implementation-dependent performance model is communicated to the Statecharts level, without making any implementation-dependent changes to the Statecharts specification. As a result, the goal of model continuity is satisfied by an increased interaction between models belonging to different design stages. The proposed methodology thus enables the designer to understand the relationship between the models and to control the interaction between them.

At the heart of this methodology is what can be called complementary modeling. Complementary modeling is the act of modeling different aspects of the system under design in different modeling environments. By describing parts of the system in Statecharts and the remaining parts in ADEPT, we perform complementary modeling. By synchronizing the execution of the Statecharts and the performance models, we introduce timing details into the execution of the Statecharts model that were not available during specification. The assimilation of such lower-level timing details is achieved without making any implementation-dependent changes in the Statecharts model. Keeping the operational specification implementation independent leaves the design space less constrained and allows the designer to experiment with various alternative implementations. The Statecharts model plays an integral role in the development and testing of the complete performance model, due to complementary modeling and conformance checking across these different modeling environments. All these factors make it possible for the Statecharts model to be continually utilized throughout the design process.

Since each increment of the performance model is simulated with the rest of the system under design, performance predictions can be obtained in the context of the entire system. Thus, as an added benefit, one can obtain performance estimates prior to developing a performance model for the entire implementation. Another anticipated benefit will accrue to the designer at the completion of the performance-to-behavioral level of design. At this level of detail, the performance information that becomes available may be fed back to the specification level for further validation of the design against the specification. This level of experimentation has not been accomplished at this time.
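
The following schematic (in Python, with invented names) illustrates the idea behind this exchange, which Chapter 4 develops as performance annotation: the specification reacts to a request and then simply waits for an acknowledgement whose timing is supplied by the performance-model side during simulation, so no implementation-specific delay constant ever appears in the specification itself.

    import heapq

    # Schematic of performance annotation; all names and values are invented.
    # The specification handles a request (rfa) and waits for an acknowledgement
    # (ack) whose delay is produced by the performance-model side at run time.
    def pm_service_delay(request):
        return 17.0  # stands in for a delay computed by the performance model

    def run(requests):
        events = [(t, "rfa", r) for t, r in requests]
        heapq.heapify(events)
        while events:
            clock, kind, payload = heapq.heappop(events)
            if kind == "rfa":
                # the spec forwards the request and waits for the pm's ack
                heapq.heappush(events, (clock + pm_service_delay(payload), "ack", payload))
            else:
                print(f"t={clock}: specification resumes after ack for {payload}")

    run([(0.0, "req_a"), (5.0, "req_b")])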

1.6 Contributions


There are three major contributions of this dissertation, namely: providing conformance checking, enabling back annotation, and allowing incremental design elaboration. Overall, these contributions lead to early detection of design errors, and a significant increase in the degree of communication between the specification stage and later design stages. These effects directly contribute to reduced design time, with better and more robust designs. The first contribution is the development of a simulation-based technique that helps to check for conformance between the specification and implementation of a given system under design. The checking is done with a minimal investment of designer effort. This dissertation is the first to study, develop and demonstrate the feasibility of such an approach. The second contribution is the incorporation of implementation dependent information into the simulation of an executable specification model. By incorporating implementation dependent information, the simulation of the operational specification will generate more realistic scenarios than otherwise possible. We anticipate that realistic simulation scenarios will result in detection of certain design errors that would have been otherwise overlooked until later design stages. The third contribution is the simulation based analysis of a partial implementation in the context of the rest of the system under design. By enabling analysis in the context of the complementary part of the system, the designer is able to study the suitability of the implementation to a greater degree as compared to it being studied only in isolation. An interesting by-product of this contribution is the ability to predict the overall performance characteristics of the system even if its performance model has been only partially developed.

1.7 Organization

The rest of the dissertation is organized as follows. In Chapter 2, we discuss related work. In Chapter 3, we discuss research issues related to the integration of Statecharts and ADEPT models. Two such issues, functional timing and conformance, are discussed in detail in Chapters 4 and 5 respectively. In Chapter 6, we present a rich set of examples that demonstrate the effectiveness, feasibility, and practicality of our methodology. We conclude in Chapter 7, where we also present some interesting extensions of our work.

1.8 Summary

We present a methodology that allows the designer to develop both the specification and the performance model in a stepwise, iteratively refined manner. Complementary modeling is achieved by modeling different system aspects in different modeling domains. A robust specification and performance model is obtained by virtue of conformance checking, where the external behaviors of the specification model and the performance model for the same system under design are compared against each other for validation.

Chapter 2 Related Work

Abstract

In this chapter, we consider several methodologies that address the early stages of reactive system design. We identify three attributes of a methodology: the conformance attribute, the interaction attribute, and the complexity attribute. In order to address the problem of model continuity, a methodology must effectively support each of these attributes. We evaluate several existing methodologies by identifying their support for each of these attributes. We summarize the results of our evaluations and recommend improvements to existing design methodologies for the effective support of model continuity.

2.1 Introduction

We first investigate the status of research addressing the three subproblems of model continuity. We consider especially those design methodologies that address the early stages of digital system design and span two or more design stages. In order to establish a general framework for evaluating a design methodology, we broadly identify two phases of design, namely the specification phase and the implementation phase. In the specification phase, the designer formulates and states the requirements of the desired end product in a formal manner (e.g., an operational specification), so that the requirements can be analyzed algorithmically. In the implementation phase, the designer proposes a partial or a complete architectural implementation of the given specification. Both the specification and the implementation phases can individually contain a number of distinct design stages. Typically, several models are developed during these stages. We call a model developed during the specification phase a specification model and a model developed during the implementation phase an implementation model.

Once the specification and implementation phases of a methodology are discussed, we evaluate how each methodology maintains model continuity as the design evolves. Specifically, we identify the attributes of a methodology that assist in solving the three subproblems of model continuity [Section 1.2 (Problem Definition)]: checking model conformance, making specification visible at all stages of design, and providing back annotation of design details. We call the identified attributes the conformance attribute, the interaction attribute, and the complexity attribute respectively.

2.1.1 Conformance attribute

The conformance attribute identifies how a methodology addresses the first subproblem of model continuity: checking conformance among the models developed. The methodology should provide either simulation-based support or analysis-based support for checking conformance between the models. By checking whether two models, developed during different design stages and in different environments, conform to each other, one can eliminate many design mistakes early in the product design cycle. We categorize conformance checking into two dimensions: vertical and horizontal. Vertical-conformance checking involves validating conformance between models representing different levels of abstraction. For example, checking conformance between a register-transfer-level behavioral model and a logic-level behavioral model requires vertical-conformance checking. Horizontal-conformance checking involves validating conformance between models representing different modeling domains. The checking of conformance between a performance model and a behavioral model is an example of horizontal-conformance checking. A methodology must provide a set of mechanisms that check for both vertical conformance and horizontal conformance among models.

2.1.2 Interaction attribute

The interaction attribute identifies how a methodology addresses the second and third subproblems of model continuity: maintaining visibility of the specification model during the implementation phase and incorporating relevant details obtained from the implementation phase back into the specification model. Supporting such a high degree of interaction and information flow among these models requires integrated modeling across different levels of abstraction and modeling domains. By integrated modeling, we imply that the flow of information occurs in both directions across the model boundaries. This flow of information can occur during either integrated simulation or integrated analysis of both models. A methodology must support bidirectional information flow across model boundaries.

Analogous to conformance checking, we categorize model interaction along two dimensions: vertical and horizontal. Vertical interaction occurs between models belonging to different levels of abstraction, whereas horizontal interaction occurs between models belonging to different domains of modeling. A methodology must provide mechanisms that support both vertical and horizontal model interactions.

2.1.3 Complexity attribute

The complexity attribute addresses the problem of controlling complexity during the development and analysis of models throughout the design stages. Given the considerable complexity of models representing nontrivial systems, support for this attribute is necessary for the effective implementation of the conformance and interaction attributes. Complexity control is primarily achieved by supporting a hierarchy of representations. Support for hierarchy significantly reduces design time, as the designer is allowed to provide less detail in creating the representation. For adding or synthesizing further information, he or she can then use automated or semi-automated design aids. In addition, a hierarchical approach allows the designer to quickly identify which portion of the design should be expanded, without necessarily expanding the rest of the system. This incremental-expansion approach is of tremendous advantage when the expanded representation is radically different from the original representation. By enabling incremental modifications, a hierarchical representation can improve the designer's comprehension of the effect of a change on the original model. Similar to conformance checking and model interaction, model complexity can also be divided into two dimensions: vertical and horizontal. Abstraction of a lower-level model into a higher-level one is an example of managing vertical complexity. Combination of models from different modeling domains into a unified representation is an example of managing horizontal complexity. The hierarchical representation must address both horizontal and vertical complexity.

In addition to a hierarchical representation, a methodology should also support conceptualization of the entire system at a very high level of abstraction. As the design evolves, the methodology should enable the designer to work through a number of levels of abstraction and domains of modeling, starting from the conceptualization of the system and carrying the design process down to its implementation. Since many design details are not anticipated or available during early stages, modeling with incomplete information should also be allowed. The ability to perform complementary modeling, where different aspects of the system are modeled in different modeling domains, should also be supported.

The organization of the rest of the chapter is as follows. Section 2.2 provides a survey of existing design methodologies that can support model continuity during the stages of reactive-system design. We conclude in Section 2.3 by enumerating concerns regarding model continuity that have not been addressed satisfactorily in current research, and we identify our research in terms of these attributes.

2.2 Design methodologies

In the following sections, we examine a number of existing design methodologies that address the problems of model continuity. We describe the salient features of each methodology and discuss its effectiveness in supporting the three attributes: conformance, interaction, and complexity.

2.2.1 MCSE

In the specification phase of MCSE [CAL93], three specification models are developed: functional, operational, and technological specifications. The implementation phase follows, consisting of a functional-design step, an implementation-specification step, and an implementation-realization step. In the functional-design step, the functional specification is decomposed into a structural description in a top-down manner. The implementation-specification step follows; its primary purpose is to allocate resources and perform some preliminary performance analyses. Finally, the implementation specification is realized in hardware and software, while keeping the technological specifications in mind. The implementation is manually validated against its specification. A top-down design process and a bottom-up implementation process are normally used.

The conformance and interaction attributes are barely supported in this methodology, as no direct support exists for checking conformance or exchanging information between models. There is no support for integrated simulation or complementary modeling. Complexity control is addressed by supporting only vertical hierarchical representation schemes for the specification.

2.2.2 SpecCharts

Gajski et al. [GAJ94, GAJ92] use the SpecCharts language to capture behavioral specifications of entire systems. The end result of the specification phase is an executable specification developed in the SpecCharts language. In the implementation phase, behavioral synthesis is used. The system behavior, described as a SpecCharts model, is partitioned among a set of physical system components. Behavioral partitioning [VG92] is used to satisfy chip-capacity constraints while considering system-performance constraints. Finally, a technique known as specification refinement is used to modify the specification to reflect the transformation of the functional components into physical system components.

The conformance attribute is addressed through the synthesis-based approach, since the implementation is algorithmically derived from the specification. This implies that as long as each of the synthesis steps preserves the design intent of the source stage, the transformed stage will follow the same design intent. Thus vertical-conformance checking is supported implicitly by synthesis. Horizontal-conformance checking is not supported, since one cannot check non-synthesized implementations for conformance with their specifications. Since this synthesis approach is based on specification partitioning, visibility of the specification is maintained in the implementation by default. Using the specification-refinement technique, the effect of implementation choices is reflected back into the specification. Specification refinement causes implementation-dependent changes to the original specification and may obscure the original specification under these changes, thus insufficiently supporting the complexity attribute. In addition, the synthesis approach requires the specification to be complete enough to be synthesized, which may not be the case in the early stages, when the design is still evolving. There is no support for incorporating external implementations and checking their conformance. Neither is the problem of verifying the structural implementation against an end-user description addressed. In summary, there is no support for integrated simulation or complementary modeling.

2.2.3 MIDAS

In MIDAS [BAG91], an architectural solution is represented in terms of a simulation model. During the specification stage, a discrete-event simulation model is developed. This model is a performance model, also called a hybrid model due to its partially implemented status. A complete operational system is incrementally constructed by replacing parts of the simulation model with operational subsystems. The resulting model can be simulated with both simulational and operational subsystems concurrently active. This clearly supports the goal of integrated simulation. A separate performance model does not need to be designed and maintained for the system. The accuracy of the model increases as more and more parts are converted. While this approach encourages the notions of iteration and modular development during the design process, the level of design is much lower than the end-user specification of the system. As a result, the complexity attribute is not fully supported, in the sense that it is not possible to model at high levels of abstraction or with incomplete design details. In fact, the initial specification is actually a proposed implementation, far removed from what an end-user typically has in mind. Further, only two levels of abstraction can coexist: the simulation model and the operational model. The conformance attribute is also not supported.

2.2.4 CAD Frameworks

CAD frameworks [SRI91, JAC92, BEG92, SCH93] are geared towards providing support for interoperability among various modeling tools. A few frameworks also provide some higher-level facilities to help designers manage the design process, especially in the form of executing static design flows and capturing design history. The key issues addressed in developing a framework are [SCH89]: control integration, which determines a tool's ability to communicate with other tools; data integration, which determines the degree to which data generated by one tool can be used by another; and user-interface integration, where different tools try to support similar interfaces. Such an approach supports the conformance and interaction attributes to some extent by virtue of the interoperability between the various tools. While there is exchange of data and control between models, there is neither much support for integrated simulation of the models nor any support for validation of conformance. Since different models are typically developed by different tools, CAD frameworks help support the goal of providing model continuity by increasing the cooperation among the tools. However, merely increasing the interaction between tools does not necessarily lead to increased interaction between the models. The nature of the interaction between models needs to be identified and addressed.

2.2.5 Performance specifications

In a predominantly software-oriented approach, Opdahl [OPD92] adds annotations to the specification of the desired product. These annotations are estimates of resource usage by the corresponding (annotated) subsystem. Performance estimates for the overall system are then obtained from these annotations. Such an approach ties the performance estimation to the specification rather than to a proposed implementation. The conformance attribute is satisfied in a limited sense, since the performance model is synthesized from the annotations. The interaction and complexity attributes are marginally satisfied in this approach. Though multiple levels of abstraction are able to coexist in the same model, the horizontal-interaction attribute is not supported. Because the performance model is synthesized from the specification rather than from a proposed implementation, incorporation of data from lower levels is not supported.

2.2.6 SIERA

SIERA [SUN91] is a CAD environment for rapid prototyping of dedicated systems involving both hardware and software components and using both custom and off-the-shelf parts. During the specification phase, the system behavior is specified as a static network of communicating processes, using VHDL to support simulation of the specification. In the implementation phase, the specification is manually mapped onto an already existing hardware/software template. The behavioral specification of the system is refined into a specification as a set of interconnected hardware processing modules, which may be programmable. The hardware processing modules implement various subsystems. The end result of the implementation phase is a complete netlist of hardware components that realize the system. This result is achieved through a series of partitioning, mapping, and synthesis steps, which are supported through libraries and templates.

While the approach is pragmatic, the design space is too constrained, due to early binding to an architectural template. Such an approach is useful only in those application domains where it is possible to come up with architectural templates. Since no checking of conformance between specification and implementation is provided, the conformance attribute is unsupported. Vertical model interaction is not supported, due to the lack of representation of multiple levels of abstraction. Information flows in only one direction: from the behavioral-specification model into the implementation. The complexity attribute is supported by hierarchical representations, which may represent many combinations of levels and domains.

2.2.7 SARA

The System ARchitect’s Apprentice (SARA) [LOR91] uses a knowledge-based synthesis tool for the derivation of a design from a specification. The specification phase consists of developing System-Verification Diagrams (SVD) and Data-Flow Diagrams (DFD), while the implementation phase consists of generating behavioral and structural models of the system through synthesis. The synthesis step helps the system designer transform requirements stated in a particular requirements language into a design stated in a particular set of design languages. The generated models are then combined, using code developed by the human designer, and simulated. In this approach, lower-level concerns are not conveniently reflected back into the original specification, given the top-down nature of the design process. The conformance attribute is supported at the cost of the complexity attribute, which is common in most synthesis approaches. The interaction attribute is also not supported.

2.2.8 NAW

Woo et al. [WOO92] compile a single specification into a hardware/software implementation. Their approach relies on the specification itself being a good starting point for the generation of a good design. The specification typically results in a single structural partitioning, where the composite elements may be implemented in either software or hardware. The implementation phase allows gradual and continuous repartitioning of the specification. During early design stages, the designer may start with an all-software implementation and refine portions of it into hardware over time. Object-oriented functional specifications are used for the co-design objects and specifications. The authors claim that their specification and partitioning technique is totally implementation independent with regard to hardware/software partitioning decisions. Their approach allows evaluation of alternative partitions. The conformance attribute is not supported by this methodology. The interaction attribute is supported only horizontally, by allowing interaction between hardware and software. The complexity attribute is supported: both hardware and software modules can exist at different levels of abstraction.

2.2.9 CMU-DA

In the CMU-DA [BLA85] system, the designers recognize that the behavioral and structural views of a system can be quite different. The specification phase consists of describing the system using a register-transfer-level language. In the implementation phase, an augmented data-flow representation of the specification is generated. Next, also during the implementation phase, the data-flow description is topologically transformed for implementation optimization. The CMU-DA system attempts to link the two design views, specifically the behavioral and the structural. Information is automatically added to the design representations to ensure that the correspondence between the elements of the models is not lost. This is one of the few methodologies that support the interaction attribute in both directions. It also allows one to switch levels of simulation, analogous to switching between source level and assembly level when debugging a software program. While this approach is a step towards multilevel analysis and simulation, the levels considered are not higher than the algorithmic level. Explicit support for checking conformance between the behavioral and structural models is not provided, so the conformance attribute is not fully supported.

2.2.10 Ptolemy

Ptolemy [BHL94] is a flexible and extensible platform for simulation, rapid prototyping, and related design environments. Two or more simulation environments built on Ptolemy can be combined into a single environment, thus enabling the combination of different computational models. Ptolemy uses object-oriented techniques, with an emphasis on polymorphism. One important aspect is the synchronization between timed and untimed models, so that a virtual global clock is maintained for all models. However, the environment does not solve the problem of incorporating timing information from one domain into the other, especially in cases where one timed model has more accurate timing information than the other. The methodology supports integrated simulation, with information travelling vertically and horizontally between models in both directions. However, the conformance attribute is not supported, either vertically or horizontally, due to the lack of any conformance-checking mechanism.

2.2.11 Hybrid Modeling

The hybrid-modeling environment at UVa [AYL92] allows a system to be designed in a hierarchical fashion, from its conceptualization to its implementation. In the early stages of the design process, when the structure, architecture, and basic design goals of the system are being developed, uninterpreted modeling is used, since the design is still independent of the exact functionality of individual components. In an uninterpreted model, the individual components, also called modules, contain no data values or data-value transformations. The uninterpreted modules provide delay and other performance-related information, which is instrumental in obtaining performance metrics for the modeled system. The uninterpreted-modeling stage can be considered the specification phase, where the developed model is the specification of the information flow to be implemented by the actual hardware. The implementation phase consists of systematically and incrementally converting the uninterpreted model into a fully interpreted description as the design unfolds. The conversion is achieved by developing the functions of individual components or by selecting fully interpreted components from existing libraries.

During the conversion stage, an uninterpreted module is replaced by a hybrid module. The latter contains both uninterpreted and interpreted counterparts of the same module, both of which are simulated concurrently. The conformance attribute is not supported. However, the interaction attribute is strongly supported, and so is the complexity attribute. The uninterpreted counterpart forces the interpreted model to be visible to the original uninterpreted specification and vice versa. The performance information obtained from the interpreted module is fed back to its uninterpreted counterpart, thus dictating the performance characteristics of the uninterpreted model. This feedback is an important contribution of this methodology. The specification, however, is not at the level or in the domain of high-level behavioral specification, especially for reactive systems. Model interaction between a specification and a structural model of a reactive system has not been addressed in this approach. In addition, the mechanism developed to synchronize time between the uninterpreted and interpreted models is not sufficient to communicate timing information from the functional model into a high-level behavioral specification. This insufficiency is discussed in Chapter 3.

2.3 Conclusions

From our discussion of the three attributes of a design methodology (conformance, interaction, and complexity), we draw two major conclusions. First, no single methodology currently addresses all three attributes effectively. Second, several improvements can be made to the state of the art in effectively supporting each attribute.

2.3.1 Support for conformance attribute

We observe that no methodology explicitly supports the conformance attribute; none supports explicit conformance checking, whether vertical or horizontal. By explicit support we mean an automated or semi-automated approach to checking conformance between models. To aid in checking conformance by manual inspection, some methodologies support integrated simulation. While vertical-conformance checking is supported by synthesis-based approaches, horizontal-conformance checking is unsupported. There is also a lack of analytical approaches for conformance checking at high levels of abstraction and for nontrivially complex systems.

Conformance checking has been studied in the context of design diversity [Avi82, BKA90, RMB81, SE86, VHT86], where multiple versions of the same component are created by different design teams. The main idea in this approach is that different versions should contain different design mistakes due to the independence of the design teams from one another. By comparing the outputs of the different versions, one can detect a design mistake whenever the outputs mismatch. This approach to checking conformance is also known as comparison checking, back-to-back testing, and automatic testing. However, most comparison-checking approaches require the versions to be at the same level of abstraction and to cover the same modeling domains. Therefore, such approaches lack support for the vertical and horizontal dimensions of the conformance attribute. Further, most comparison-checking approaches require the versions to produce outputs in exactly the same order. Such a requirement can be overly restrictive, as the specification may accommodate several possible orderings of the outputs. In such cases, one should allow the outputs to differ in the sequence in which they are generated.
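As an illustration of relaxing the ordering requirement, the following VHDL sketch compares the multisets of outputs produced by two versions rather than their exact sequences, flagging a mismatch only if the collections of outputs differ at the end of the test. The entity and port names, the clocked valid/value interface, and the encoding of outputs as small integers are assumptions made solely for this illustration; they are not part of any of the surveyed approaches.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical order-insensitive comparison checker (sketch only).
entity order_insensitive_checker is
  generic (MAX_VAL : natural := 15);             -- outputs assumed encoded as 0 .. MAX_VAL
  port (
    clk         : in std_logic;
    a_valid     : in std_logic;                  -- version A emits an output this cycle
    a_value     : in natural range 0 to MAX_VAL;
    b_valid     : in std_logic;                  -- version B emits an output this cycle
    b_value     : in natural range 0 to MAX_VAL;
    end_of_test : in std_logic                   -- asserted once all stimuli have been applied
  );
end order_insensitive_checker;

architecture sketch of order_insensitive_checker is
begin
  process (clk)
    type count_array is array (0 to MAX_VAL) of natural;
    variable a_counts, b_counts : count_array := (others => 0);
  begin
    if rising_edge(clk) then
      -- Tally each output value; the order in which outputs appear is irrelevant.
      if a_valid = '1' then
        a_counts(a_value) := a_counts(a_value) + 1;
      end if;
      if b_valid = '1' then
        b_counts(b_value) := b_counts(b_value) + 1;
      end if;
      if end_of_test = '1' then
        assert a_counts = b_counts
          report "Versions produced different multisets of outputs"
          severity error;
      end if;
    end if;
  end process;
end sketch;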

2.3.2 Support for interaction attribute

The interaction attribute is not effectively supported by most methodologies. The flow of information typically occurs in only one direction across model boundaries: from higher to lower levels of abstraction. A few approaches, such as the MIDAS and Hybrid-Modeling methodologies, support bidirectional flow. Reflection of lower-level details is supported in SpecCharts, but at the cost of lowering the level of abstraction of the specification model. In some cases where integrated simulation is supported (e.g., Ptolemy), the focus of the methodology is restricted to implementing clock synchronization. Executing one model while incorporating potentially more accurate delay information from another model is not supported, except in the case of the Hybrid-Modeling approach. However, we find that the Hybrid-Modeling approach is not sufficient to handle reactive systems at high levels of abstraction. Integrated simulation is rarely supported, and integrated analysis is not supported at all. The lack of any support for integrated analysis is likely because analysis of nontrivial systems across levels of abstraction is very complicated, making analysis across different modeling domains even more difficult.

2.3.3 Support for complexity attribute

We observed that most methodologies support vertical representation, with different abstraction levels coexisting in the same representation. Some methodologies support horizontal representations by allowing complementary modeling across modeling domains. However, both horizontal and vertical representations are not found together in most methodologies. Further, many methodologies do not extend their support to higher levels of abstraction. It is much easier to comprehend high-level behavior in terms of low-level behavior, but the reverse problem, i.e., comprehending lower-level structure in light of higher-level behavior, is generally not supported.

A number of the methodologies studied are based on the synthesis technique [GV80, LOR91] and address the conformance attribute partially by providing vertical-conformance checking. We now summarize our observations on synthesis-based approaches in view of the three attributes.

Synthesis-based approaches

For synthesis-based methodologies, implicit support occurs in the form of vertical-conformance checking. A synthesis step typically takes a relatively higher-level abstraction of a system and transforms it into a relatively lower-level description of the system by adding details that were not of interest to the designer at the higher level. The end result of a synthesis process is typically a structural description consisting of a set of interconnected lower-level components, often called a netlist. If each synthesis step preserves the functionality of its source description and uses verified components to generate the resulting description, the resulting netlist will conform to the source description. Thus, vertical checking of conformance, i.e., checking between a higher-level and a lower-level model, is implicitly supported. Vertical-conformance checking is a major advantage of synthesis-based approaches.

However, there are several disadvantages to a synthesis-based approach. First, the components are typically not verified for correctness and are often obtained from a third-party vendor. Lack of correctness of individual components can result in nonconformance of the synthesized product. Second, the synthesized model is typically at a much lower level of abstraction. A lower level of abstraction makes it relatively more difficult to comprehend the model behavior and, consequently, harder to locate and eliminate errors. Third, high-level descriptions can lack sufficient information to perform effective synthesis, resulting in poor implementations. Finally, synthesis typically requires early binding of implementation choices, resulting in a limited design space and therefore fewer options for an effective implementation.

2.3.4 Recommendations

On the basis of our analysis of various methodologies against our defined attributes, we conclude that no single methodology effectively supports all three attributes. The notion that the specification is an evolving design stage that may be modified due to design decisions made later has rarely been supported in existing design methodologies. Based on our observations, we recommend that a methodology today support the following features in addition to its existing attributes:

• Conformance checking supporting both vertical and horizontal dimensions

• Integrated simulation supporting bidirectional flow across models

• Hierarchical representation supporting both vertical and horizontal dimensions

• Provision for higher levels of abstraction

The methodology presented in this dissertation incorporates these features.

2.4 Summary

We evaluated several design methodologies applicable to the early stages of digital-system design. The concept that the specification is a design stage that can change as the design unfolds has not been effectively supported by existing methodologies. We identify the specific attributes that have not been supported by these methodologies. By incorporating these attributes, a methodology will effectively address the problem of supporting model continuity.

Chapter 3 Research Issues

Abstract

In order to effectively address the problem of maintaining model continuity, a methodology must strongly support three attributes: conformance, interaction, and complexity. To support these attributes in the context of the operational-specification and performance-modeling design stages, several issues must be addressed. These issues include the definition and validation of conformance, the exchange of information during integrated simulation, and the detection of errors. Implementation concerns arising from the integration of a Statecharts model and an ADEPT model are also addressed.

3.1 Introduction

In order to effectively address the problem of maintaining model continuity, a methodology must strongly support three attributes: conformance, interaction, and complexity. The methodology proposed in Section 1.3 is based on an integrated simulation of models developed during two different design stages: operational-specification modeling and performance modeling. These models are developed in the Statecharts and ADEPT modeling environments, respectively. Given the divergent focus of the Statecharts and ADEPT environments, supporting the three attributes becomes a nontrivial problem. In this chapter, we discuss the issues that need to be addressed so that the three attributes are effectively supported by the proposed methodology.

Supporting the conformance attribute implies that a precise definition of conformance is needed, along with a mechanism that can detect violations of conformance between the two models. Supporting the interaction attribute implies that the types of interaction that can occur between the two models must be identified. Errors whose detection is made easier by the enhanced interaction between the two models should be categorized, so that their possible sources can be identified. Supporting the complexity attribute implies that both complementary modeling and hierarchical decomposition of the problem should be allowed. Complementary modeling implies that some parts of the system under design are modeled using ADEPT, whereas the remaining parts are modeled using Statecharts. Hierarchical decomposition implies that the designer is allowed to partition a larger design problem into a set of smaller subproblems, so that the complete design can be developed in a systematic and incremental manner.

The rest of this chapter discusses these issues as follows. Section 3.2 discusses the need for a precise definition of, and a validating mechanism for, conformance between an operational specification and a performance model. Section 3.3 discusses the types of information that need to be exchanged between the two models. Section 3.4 discusses the types of errors we expect to uncover using our methodology. Finally, Section 3.5 discusses various implementation issues that arise when integrating a Statecharts and an ADEPT model.

3.2 Formal definition and detection of conformance

Two models are said to conform to each other if they describe the same system. In order to check whether an operational specification and a performance model conform to each other, a precise definition of conformance is required. Defining conformance precisely is a nontrivial task. The operational specification and the performance model are divergent: they are developed in different modeling environments, describe different aspects of the system under design, and have different degrees of implementation detail available. In order to validate the conformance of the models during simulation, one has to make sure that the simulation activities occurring in the models correspond, that is, both models should predict analogous behaviors for the system under design. Due to the divergence between these models, correspondences between activities in the two models are not apparent, making it difficult to recognize when the two models are behaving analogously. We also need to develop a mechanism to validate the conformance of the two models. Such a mechanism should monitor the execution of both models and raise an error flag whenever a violation of conformance is encountered. In order to develop such an algorithm to detect conformance, some design assumptions may be necessary. These design assumptions should be realistic. Further, the correctness and the completeness of the algorithm under these design assumptions must be proved. These issues and our solutions are covered in detail in Chapter 5.

3.3 Exchange of information during simulation

In order to comprehend and implement interactions between an operational-specification model and a performance model, the type of information that has to be exchanged between the models during an integrated-simulation session must be identified. We have identified two kinds of information exchange that can occur between the two models.

The first kind of information exchange occurs in the form of translation of stimuli from one model space to the other. In this dissertation, we address only those models that are based on the stimulus-response paradigm. In this paradigm, a model waits for relevant external or internal stimuli to occur. When the relevant stimuli do occur, the model responds by changing its state and/or by generating further stimuli. This cycle of stimulus and response is repeated until the end of the simulation session. Thus, among models based on the stimulus-response paradigm, the flow of information can occur in the form of an exchange of stimuli. A stimulus generated during the execution of the operational-specification model can therefore act as a stimulus for the performance model and vice versa. The actual translation of stimuli occurring in the operational-specification model into a form suitable for interpretation by the performance model, and vice versa, is dependent on the modeling languages chosen. This implementation concern is discussed in Section 3.5.

The second kind of information exchange occurs in the form of back annotation of details from the performance model into the operational specification. In practice, the operational specification is usually incomplete. The incompleteness may be due to several reasons. First, the specifier may deliberately leave some details unspecified for the sake of implementation independence, so that the implementor has more freedom in selecting an implementation. Second, the specifier may not be able to realistically anticipate all design scenarios and hence unwittingly leave some design scenarios unspecified. Third, the specifier may choose to develop the specification gradually and iteratively. Once the performance model is developed, however, information such as implementation-dependent timing delays becomes available. Incorporating such information into the execution of the operational specification will result in more realistic operational scenarios, and hence will generate more accurate predictions of system performance and behavior. Incorporation of timing estimates obtained during the execution of the performance model into the execution of the operational specification should be achieved while maintaining the implementation independence of the operational specification. We call such timing estimates functional timings. Incorporating functional timings in an implementation-independent manner significantly increases the ease of comparing alternative implementations and preserves the level of conceptual abstraction in the operational specification. This issue and our proposed solution are discussed in more detail in Chapter 4.

3.4 Detection of errors

During the course of integrated simulation of the Statecharts and ADEPT models, one can anticipate two kinds of errors. The first kind of error is detected when the two models fail to conform. This kind of error can occur due to either an incorrect interpretation of the specification or an ambiguity in the specification. In either case, the two models will eventually exhibit behaviors that do not correspond, since they effectively describe different systems. We call such errors conformance errors. The second kind of error occurs when the two models conform, but the designer discovers logical errors in the specification as a result of the integrated simulation. This kind of error is expected especially when the specifier fails to anticipate certain design scenarios that become apparent only after information from the performance model is incorporated into the execution of the operational specification. We call such errors logical errors. The scope of this dissertation is limited to the detection of errors identifiable through a simulation-based approach. If the simulation path avoids an erroneous design scenario, the error may remain undetected.

Conformance errors

One kind of anomaly occurs when the operational specification and the performance model do not conform during simulation. There are two reasons why the two models may fail to conform during simulation, giving rise to two categories of errors: incorrect-implementation errors and racing errors. In this dissertation, we do not address the issue of automatically differentiating between the two categories of errors. The categorization is presented for the sake of completeness and as possible guidance to the designer in eliminating such errors.

The first reason for nonconformity between the two models is that the performance model of the system under design may not represent a correct implementation of the specification. Such incorrectness is typically due to an ambiguity in the specification itself. Incorrect implementation implies that there are certain operational scenarios in which the two models fail to produce conforming output sequences. If the implementation is incorrect, there will be at least one operational scenario in which the outputs produced by the implementation differ from those predicted by the specification. If the performance model does not abstract away the difference between a correct and an incorrect output, the output generated by the performance model will differ from that predicted by the operational specification. If the simulation session encounters such a scenario, the sequences of outputs generated by the two models will not be analogous, and a violation of conformance should be flagged. We call such violations incorrect-implementation errors.

The second reason for nonconformity between the two models is the race condition that exists between the operational specification and the performance model. The race condition is explained as follows. The externally observable behavior of a reactive system is typically described using a combination of state changes and transitions. At the operational-specification stage, since the actual time to be taken by an implementation of the specification is usually unknown, the time associated with these state changes and transitions is usually ignored. Only the dependency relationships between these transitions and state changes are preserved, with the transitions themselves assigned zero time delays. On account of these zero-delay transitions, the operational-specification model typically takes a sequence of transitions earlier than its performance-model counterpart performs the analogous actions. Suppose the input scenario changes between the instant the operational specification takes a transition and the instant the performance model performs the analogous action. Being reactive in nature, and therefore in continual interaction with its environment, the system is required to react according to the changed input scenario. Consequently, the performance model will follow a simulation path not analogous to the one taken by the specification model, and eventually produce an output that is different from what was predicted by the operational specification. This difference will be manifested as a conformance error. We call errors caused by race conditions racing errors.

A racing error occurs if the test scenario generated by the environment of the system under design is unrealistic. Unrealistic test scenarios occur when input events are generated by the environment faster than the system is expected to handle them. Another cause of racing errors is that the specification itself is unrealistic. In this case, the operational specification may describe a system that is expected to respond to its environment faster than any realistic implementation can, causing a difference in the externally observable behavior of the two models.

Handling conformance errors

A conformance error is detected when the ADEPT model produces an output that was not predicted by the Statecharts model. Based on the outputs involved, the designer can trace back the sequence of simulation events in both models until the source of the mismatch is discovered. If the error is due to an incorrect implementation, the designer fixes the error and proceeds with the remaining steps of the methodology. On the other hand, if a racing error due to an unrealistic input scenario is detected, the designer may modify the testing environment so that unrealistic input patterns are not generated. If the specification itself is found to be unrealistic, modifications are necessary to the specification, and possibly to the proposed implementation.
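As a minimal illustration of how such detection can be realized at the VHDL level, the sketch below counts the outputs announced by the translated Statecharts model and the outputs observed from the ADEPT model, and raises a flag as soon as the ADEPT model produces an output that the specification has not yet predicted. The signal names are hypothetical, and each output occurrence is abstracted to an event on a single signal; the actual detection algorithm, which also compares output values and tolerates reordering, is developed in Chapter 5.

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical conformance monitor (sketch only).
entity conformance_monitor is
  port (
    spec_out : in std_logic;   -- toggles when the Statecharts model predicts an output
    impl_out : in std_logic    -- toggles when the ADEPT model produces the analogous output
  );
end conformance_monitor;

architecture sketch of conformance_monitor is
begin
  process (spec_out, impl_out)
    variable predicted : natural := 0;   -- outputs announced by the specification model
    variable produced  : natural := 0;   -- outputs observed from the performance model
  begin
    if spec_out'event then
      predicted := predicted + 1;
    end if;
    if impl_out'event then
      produced := produced + 1;
    end if;
    -- The ADEPT model has produced an output that the specification never predicted.
    assert produced <= predicted
      report "Conformance violation: output not predicted by the Statecharts model"
      severity error;
  end process;
end sketch;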

Logical errors

The second kind of anomaly occurs when the simulation proceeds along an unexpected path, thus exposing unanticipated behavior or inconsistencies in the specification. Inclusion of timings from the performance model can cause the simulation of the operational specification to proceed along operational scenarios that would not have been encountered otherwise. Such operational scenarios may have been unanticipated at the operational-specification stage. Detection of such errors usually takes place either by manual inspection, or by the system entering designated error states or producing diagnostic outputs as a result of violating some explicit constraint in the Statecharts.

Handling logical errors

A logical error can typically be handled in the same way as one would handle an error in the original specification. This involves making changes to the Statecharts, reflecting these changes in the proposed implementation, and continuing with the remaining steps of the methodology.

Both conformance and logical errors are difficult to detect. Given the complex nature of the systems under design today, a number of conformance errors can remain undetected until later stages unless some degree of automation and a methodological approach to error detection are provided. In the case of logical errors, unless an integrated-simulation environment is provided, the errors will likely not be detected until much later. The delayed detection is expected because the design scenarios that were not anticipated in the specification will not arise in the execution of the specification alone. Such delayed detection of errors results in costlier fixes and longer design cycles.

3.5 Implementing Integrated Simulation

So far, we have discussed in general terms the issues that need to be addressed when integrating any two stimulus-response-based models. In this section, we specifically discuss the issues that need to be resolved in order to integrate a Statecharts and an ADEPT model.

3.5.1 Choice of a common simulation environment

The modeling environments of Statecharts and ADEPT are quite different, since they describe different aspects of the system and are based on different modeling languages and formalisms. This difference makes integrated simulation of a Statecharts model and an ADEPT model a nontrivial problem. One way to address this problem is to choose a common simulation environment in which both models can be executed. A common simulation environment makes it easier to translate stimuli generated by one model into a form suitable for the other model, thus enabling the exchange of information between the models. For the common simulation environment, we chose the VHDL modeling environment, which is the underlying language of ADEPT. The primary reason for choosing VHDL is that it is a simulation environment suitable for representing and analyzing different modeling domains and abstractions. As a result, tools are readily available for converting Statecharts and ADEPT descriptions into equivalent VHDL descriptions. Once the models are translated into equivalent VHDL descriptions, the model interactions can be written as VHDL code. This approach retains the original design view for both Statecharts and ADEPT.

We also considered choosing Statecharts as the unifying environment. In fact, we have developed preliminary rules to translate the ADEPT primitives, as well as the token-handshake protocol, into Statecharts representations. Based on these rules, one can translate an ADEPT model into a Statecharts representation [Appendix D], so that the resulting Statecharts representation and the original ADEPT model are isomorphic. Executing the Statecharts representation of the ADEPT model, however, has several drawbacks. First, the Statecharts representation resulting from the translation, being isomorphic to the original ADEPT model, is quite large, because an ADEPT model of a nontrivial system is typically composed of a large number of instances of the ADEPT primitives. The large size adds a significant overhead to the execution time of the Statecharts simulator. As a result, integrated simulation of the original Statecharts specification and the translated ADEPT model would be significantly more expensive in terms of simulation-execution time. Since the Statecharts translations of the ADEPT primitives do not necessarily provide any clearer understanding of the primitives than the original ADEPT primitives, this expense is not justified. The second drawback is that the Statecharts modeling environment is primarily suitable for operational-specification modeling. In contrast, one of the primary advantages of the VHDL environment is its ability to model several aspects of a system, including operational specification and performance. Further, the ADEPT environment supports integrated modeling with lower design stages using what is called the hybrid-modeling approach [AYL92]. Our research therefore adds the operational-specification-modeling design stage to the list of design stages covered by hybrid modeling under the same umbrella. The increased simulation overhead and the inability to extend model continuity to later stages of design make the Statecharts environment a less attractive choice for the unifying environment than VHDL. We therefore decided to limit our scope of unifying environments to the VHDL environment.

Given VHDL as the unifying environment for performing integrated simulation of Statecharts and ADEPT models, we translate the Statecharts and ADEPT models into their equivalent VHDL models using existing tools [ILO92, SRI90]. Both Statecharts and ADEPT are at a higher level of abstraction and are developed specifically with their respective modeling domains in mind. It is therefore preferable to shield the designers from VHDL-level programming as much as possible, in order to preserve the high-level conceptual elegance of the Statecharts and ADEPT models.

3.5.2 Exchange of information between Statecharts and ADEPT models

During a simulation session, the stimuli for the Statecharts model occur in the form of broadcast events and changes in conditions. In the ADEPT model, stimuli occur in the form of token arrivals or departures and changes in conditions. However, when translated to VHDL, the code representing the generation and consumption of these stimuli is based on VHDL signal assignments. The rules and general techniques for such translations are presented in Appendix A. We are also concerned with transmitting timing information obtained from the ADEPT model into the Statecharts model. We develop a technique called performance annotation to achieve this. Performance annotation involves making implementation-independent modifications to the Statecharts so that performance-dependent timings are incorporated from the simulation of the ADEPT model into the simulation of the Statecharts model. Performance annotation is described in Chapter 4.

Both the Statecharts and ADEPT models are translated into equivalent VHDL code and are represented as separate VHDL components. The entity declarations of these VHDL components define the interfaces through which the components exchange information with the outside world. Each interface is a collection of ports, where each port is associated with a VHDL signal. A stimulus generated by a model for its environment appears as an event on the signal associated with the corresponding port. Similarly, a stimulus can be passed into a model by generating a VHDL event on the corresponding port. The actual exchange of information is managed by a separate VHDL component, which we call the linking code. The linking code essentially monitors VHDL events occurring on the ports of one model and generates appropriate events on the corresponding ports of the other model.

3.5.3 Linking Statecharts and ADEPT models through VHDL

For a given component of a Statecharts specification, one has to identify what stimuli are exchanged with the outside world. These stimuli can be broadcast to other Statecharts components or, through the linking code, to ADEPT models. There are two kinds of stimuli: events and conditions. Events are instantaneous stimuli that can be generated explicitly or as a result of exiting or entering a particular state. Conditions are persistent stimuli that arise from changes in the values of variables or from the occupation of certain states. Any communication with the environment of the Statecharts component takes place through an event or a condition. These events and conditions are called the linking points of the Statecharts component.

An ADEPT model interacts with its environment using both input and output ports. Stimuli are exchanged with the environment through the arrival, departure, or handshake status of tokens on these ports. Analogous to the Statecharts model, these ports are called the linking points of the ADEPT model. Once the linking points of the Statecharts and ADEPT models are identified, the correspondence between the linking points must be determined. Since both models will eventually be translated into VHDL representations, these correspondences must be mapped into VHDL as well. This mapping is presently achieved manually; for an example, see Appendix A, Section A.4.
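As a rough illustration of what such a manually written mapping can look like at the VHDL level, the sketch below forwards a Statecharts broadcast event to an ADEPT input port. The signal names are hypothetical, and the token-handshake protocol is abstracted to a simple request/acknowledge pair; the actual ADEPT port types and protocol, as well as the signal names produced by the Statecharts-to-VHDL translator, differ (see Appendix A).

library ieee;
use ieee.std_logic_1164.all;

-- Hypothetical linking-code fragment (sketch only).
entity sc_to_adept_link is
  port (
    sc_event  : in  std_logic;   -- an event on this signal represents the Statecharts broadcast
    adept_ack : in  std_logic;   -- acknowledgment from the ADEPT module (handshake abstraction)
    adept_req : out std_logic    -- token request presented to the ADEPT module
  );
end sc_to_adept_link;

architecture sketch of sc_to_adept_link is
begin
  forward : process
  begin
    adept_req <= '0';
    wait on sc_event;                -- Statecharts stimulus observed at its linking point
    adept_req <= '1';                -- present a token at the corresponding ADEPT linking point
    wait until adept_ack = '1';      -- wait for the ADEPT module to accept the token
    adept_req <= '0';                -- complete the (abstracted) handshake
    wait until adept_ack = '0';
  end process forward;
end sketch;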

3.5.4 Developing test-bench

A test-bench is developed to simulate the operational environment with which the system is expected to interact. There are three possible choices for developing the test-bench: Statecharts, VHDL, or ADEPT. In Statecharts, one can create operational scenarios mainly for debugging purposes; there is not much support for obtaining performance estimates. Developing the test-bench directly in VHDL may be very tedious, since the designer would have to build the entire VHDL model from scratch. We chose the ADEPT environment because its modeling primitives facilitate the building of a test-bench and because it is well suited to obtaining performance estimates. We therefore develop the test-bench using ADEPT. As a first step, we identify all the points of communication the system under design may have with its environment. These points of communication define the interface of the test-bench, through which it exchanges stimuli with the model of the system under design.

3.5.5 Identification of Statecharts partitions

An important step in our methodology is to identify the components in the Statecharts that are suitable for replacement by analogous performance models. The components are identified by partitioning the Statecharts specification. Any Statecharts specification can be represented as a hierarchical tree. The leaf nodes are basic states. The internal nodes are superstates, which are made up of one or more states. The constituent states of a node may be orthogonal or mutually exclusive. We partition a Statecharts specification into its subtrees. The subtrees can be partitioned into further subtrees, and so on, until one reaches a basic state. Any subtree of the Statecharts tree can be chosen as a component. Choosing a subtree implies that the corresponding performance model implements the behavior specified by that subtree of the Statecharts specification. The interaction of the performance model with its external environment should be analogous to the interaction of the Statecharts component with the same environment.

3.5.6 Degree of automation

In order to be effective, the methodology must allow the designer to concentrate on the design problem at hand rather than on non-design-related concerns. We identify the extent to which the steps of the methodology can be automated and describe complete rules for doing so. The feasibility of automation and the validity of these rules are demonstrated in the context of nontrivial examples. In this dissertation, we are interested in demonstrating the feasibility and validity of automating these rules. The task of producing the software that embodies these rules is mostly an implementation concern that we relegate to future work.

3.6 Summary

Two broad tasks are addressed in this dissertation. The first is to investigate how to integrate an operational specification and a performance model, both of which are based on the stimulus-response paradigm. This investigation establishes the theoretical framework on which the methodology is based. The second task is to establish complete rules and guidelines for implementing the methodology in the context of the Statecharts and ADEPT modeling environments.

Chapter 4 Functional Timing

Abstract

Functional timings describe the implementation-dependent delays associated with reactive-system behavior. However, functional timings are traditionally ignored in reactive-system specifications. Ignoring functional timings often results in ambiguous operational specifications and unexplored operational scenarios, thereby reducing the effectiveness of any simulation-based validation of an operational specification. To include functional timings in the execution of an operational specification, the novel technique of performance annotation is presented. Using this technique, an operational specification developed using Statecharts can dynamically incorporate functional timings from the performance model of a proposed implementation of the system under design. Performance annotation is based on a set of generic transformation rules applied to a Statecharts specification. These transformations ensure that the Statecharts model takes the same amount of time to complete an activity as is taken by its performance-model counterpart. In addition, the original structure of the specification is preserved, thus maintaining the original specifier intent.

4.1 Introduction

In reactive-system specifications, the notion of timing is important as a means to reflect real-world implementations. By incorporating timing information into the specification, more realistic results can be obtained from simulation of the system. The timing information that can be specified falls into two categories: timing constraints and functional timings.

A timing constraint specifies a temporal property that an implementation of the specification must satisfy. Several kinds of timing constraints can be specified for a system. For example, execution-timing constraints specify the amount of time allowed to execute a behavior. Data-rate constraints specify the rate at which a behavior can generate or consume data. Finally, inter-event timing constraints specify the minimum or maximum time allowed between the occurrences of two events. Timing constraints are specified in a Statecharts model using a combination of time-out events and states. If, during simulation, the Statecharts model of the system under design fails to meet a timing constraint, the model typically enters a specially designated error state or generates a diagnostic output. Since timing constraints are usually known by the time a specification is being developed, they are specified explicitly in the Statecharts model of the system under design. Notice that a timing constraint does not directly affect the actual time it takes the system to execute the specified behavior. For example, a constraint may specify that, for correct behavior, a transition must occur before three time units expire after the system enters a given state. The actual time taken to implement that behavior, however, is determined not by the constraint but by the implementation.

Functional timing is the amount of time spent by the system under design to change its state by taking an enabled transition. A transition is enabled once the trigger for that transition occurs. In contrast to a timing constraint, the functional timing determines the time taken by a transition during a simulation session, and thus directly affects the subsequent simulation path. In a Statecharts model, functional timing can be specified using a combination of time-out events and scheduled events. Such timed events control the amount of time the Statecharts model spends in a specific state before it executes a transition.

Functional timings are difficult to predict during the operational-specification stage. Furthermore, the perfect-synchrony hypothesis [BER91] motivates the specifier to ignore functional timings, since assuming this hypothesis leads to better composability and analysis of specifications. Our claim is that the lack of functional-timing information in the operational-specification model often results in a failure to detect several timing problems. If not detected at the stage of operational specification, these errors permeate to lower-level design stages, where they are costlier to detect and fix. It is preferable to detect these timing errors during simulation of the operational specification, as it is easier to comprehend and rectify them at the conceptual level of the specification. Therefore, it is beneficial to be able to execute operational-specification models with functional timings.

In this chapter, we present a technique, called performance annotation, that enables the incorporation of functional timings dynamically into the execution of a Statecharts model. This technique modifies the Statecharts specification so that the execution of Statecharts transitions is delayed by the same amount of time as predicted by a concurrently-executing performance model of the proposed implementation. The modifications made by performance annotation preserve the original structure of the specification. Preserving the original structure enables the designer to remain at the conceptual level of the specification while considering the impact of decisions made at the performance-modeling design stage.

The rest of this chapter is organized as follows. Section 4.2 discusses why functional timings are usually ignored in operational specifications. Section 4.3 discusses the ineffectiveness of executing Statecharts specifications that fail to incorporate functional timings. Section 4.4 illustrates the benefits of including functional timings in the specification. In Section 4.5, we introduce the technique of performance annotation, followed by an example of its application in Section 4.6. Finally, we present the rules for applying performance annotation in Section 4.7.
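To make the distinction concrete, the following minimal Python sketch (illustrative only; it is not part of the Statecharts or ADEPT tool flow, and the function and parameter names are invented) treats the functional timing as a delay that postpones a state change, and the timing constraint as a separate check applied to that delay.

    # Minimal sketch: functional timing vs. timing constraint (illustrative only).
    def run(delay, constraint=3):
        """Model one trigger followed by one transition with a functional delay,
        then check an inter-event timing constraint against the observed delay."""
        trigger_time = 0                        # the trigger occurs at t = 0
        transition_time = trigger_time + delay  # functional timing: when the state actually changes
        meets_constraint = (transition_time - trigger_time) <= constraint
        return transition_time, meets_constraint

    print(run(delay=0))   # idealized specification: instantaneous transition, constraint met
    print(run(delay=5))   # a proposed implementation: 5-unit delay, constraint violated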

4.2 Specification of functional timings in Statecharts

During the execution of a Statecharts model, a transition can be executed without the real clock being advanced. For example, in Figure 4.1, none of the transitions has any time associated with it. While such a transition does not consume any real time during Statecharts simulation, any realistic implementation of that transition will require a nonzero delay. The lack of functional-timing information thus causes an apparent contradiction between specified and observed system behavior.

Figure 4.1 Statecharts without functional timings

There are four common reasons for ignoring functional timing. First, information about the actual delay involved in implementing a transition is usually unavailable during the specification stage. Second, since the structure of the specification and that of its implementation can differ, not every transition in the Statecharts has a directly corresponding activity in the implementation. For example, the behavior specified by a group of transitions in the Statecharts may be implemented by a single action in the implementation. As a result, the time taken to perform the equivalent behavior for each individual transition is not easy to estimate, and this lack of correspondence makes it difficult to determine the functional timing associated with a specific transition. Third, since the underlying implementation of the specification may be relatively complex, the time taken can differ between executions of the same transition. In particular, the time taken for each execution may depend on parameters that do not remain constant from one execution to the next. Fourth and finally, there is significant tedium involved in specifying transition times, especially in complex systems with a large number of state transitions. Given these reasons, the Statecharts specifier typically ignores the delay needed to execute a single transition or a series of transitions, which implies that transitions within the Statecharts are considered to occur instantaneously. Ignoring the delay for a transition is often acceptable, since it allows the specifier to focus only on specifying the intended sequence of state changes and outputs, without having to provide implementation-dependent functional-timing information. In practice, most operational specifications ignore transition delays based on the perfect-synchrony hypothesis [BER91].

Perfect-Synchrony Hypothesis

The convention of specifying transitions with no timing delays is based on the perfect-synchrony hypothesis. Under this hypothesis, a reactive system is specified as an ideal system whose outputs are produced synchronously with its inputs. In other words, the reactions of a synchronous reactive system take no observable time. Asynchronous systems, in contrast, require nonzero observable time to execute their reactions. The main advantage of adopting the perfect-synchrony hypothesis is the simplicity, elegance, and ease it brings to describing, composing, and analyzing reactive systems. Further, sophisticated algorithms can take advantage of this hypothesis to produce highly efficient implementations of such specifications.


A perfectly synchronous reactive system is therefore purely passive and input-driven: it must wait for events coming from its environment and then react instantaneously to them. In Statecharts parlance, then, state transitions, event generation, and event broadcasting are instantaneous. While no perfectly synchronous reactive system can exist in reality, perfect synchrony can be a reasonable approximation as long as the rate of change of the input scenario is slower than the rate at which the implementation produces outputs. For example, in a clocked digital circuit, communication between components behaves synchronously as long as the clock is not too fast. To summarize, the functional timing associated with a Statecharts transition is usually either ignored or a rough approximation of the real delay involved in performing the equivalent activity in an implementation, which can result in less realistic simulation scenarios.

4.3 Effects of ignoring functional timing

We now discuss how the assumption of the perfect-synchrony hypothesis reduces the effectiveness of a simulation-based validation approach for an operational specification. There are two negative effects of this assumption:
• Ambiguity in specification: the system state depends on the functional timing.
• Exclusion of operational scenarios: several operational scenarios may never be tested during simulation.
We demonstrate these negative effects using two example Statecharts specifications. The first example shows how two simulations with different functional timings can lead to different system states and outputs for the same input scenario. The second example shows how incorrect functional timings can leave certain scenarios unexplored during simulation.

We now describe the first example. In Figure 4.2(a), state S is composed of mutually exclusive states S1 and S2. By default, S starts in state S1. The events a and b represent external stimuli that the system under design receives from its environment. Events ea and eb are generated in S and are visible to other states as well as to the environment. When the event a occurs, event ea is generated and the Statecharts enters state S2. Subsequently, when the external event b occurs, event eb is generated and state S1 is entered. Depending on whether the system under design is in S1 or S2, either event b or event a, respectively, is ignored.

Figure 4.2 Ambiguity in specification in the absence of functional timings. (a) Input scenario: default state S1; event a followed by event b one clock unit later. Output scenario: final state S1; event ea followed by event eb one clock unit later. (b) Input scenario: default state S1; event a followed by event b one clock unit later. Output scenario: final state S2; event ea.

Notice that the time elapsed between the occurrences of events a and b and the corresponding transitions in the Statecharts is zero in real clock time. As long as the events a and b do not occur at the same instant, the simulation of the Statecharts proceeds exactly as described above. If both events a and b occur at the same instant, either ea or eb is generated, depending on whether the Statecharts was in state S1 or S2, respectively. For example, if the system under design is in state S1 and the two events a and b occur simultaneously, the Statecharts ends up in state S2 while generating only event ea.

However, in any realistic implementation of the behavior described for state S, some nonzero delay is incurred between the occurrence of the events a and b and their corresponding transitions. Let us assume that it takes 2 clock ticks to generate the events ea and eb on receiving events a and b, respectively. This is depicted in Figure 4.2(b), where we introduce the functional timings using time-out events. For example, the transition from S1 to S2 occurs 2 clock ticks after event a is generated. As long as the events a and b occur with a minimum inter-arrival time of 2 clock ticks, the behavior of the system under design is exactly as predicted by the Statecharts.

Now consider the following scenario. Suppose the system under design is in state S1. If the event a occurs, followed 1 clock tick later by the event b, the Statecharts of Figure 4.2(a) would generate the events ea and eb and end up in state S1. The implementation, however, will not have reached state S2 when event b occurs, and therefore it ends up in state S2 while generating only event ea. Thus, for the same input scenario, different functional timings led to different system states. Notice that the delay of 2 clock ticks is an implementation-dependent parameter, and hence it is unlikely that one could have assigned these delay values in the Statecharts. Depending on the functional timing, the same specification can therefore predict different output scenarios, resulting in an ambiguity in a specification that ignores functional timing.

In our second example, described in Figure 4.3, we show a case where not knowing the correct functional timing can result in certain simulation scenarios never being executed. We first discuss the execution of the Statecharts SS in the absence of any functional timings. Lack of functional timings implies that the transitions take zero delay. First, the transition from SA to SB is made, and the event a is generated. The generation of event a triggers the transition from state S1 to S2, generating the event o. The next step is the transition from state SB to SD. The state SC is never entered. In short, if we ignore the functional timings, we always end up in state SD, provided the events a, o and c are not generated from the environment of SS.

Now consider the case where there is a nonzero delay between the occurrence of an event trigger and the resulting action. For example, let us assume that the transition from S1 to S2 takes 4 nanoseconds. In other words, on the occurrence of the event a, the proposed implementation actually takes 4 nanoseconds before the transition is made to S2 and the event o is generated. Now let us try to execute the Statecharts specification. As in the zero-delay case, the Statecharts enters SB and waits for either a time-out or the event o to occur. Since the event o will not be generated until 4 ns have elapsed, the time-out takes place after 3 ns, causing the system to enter state SC, which is different from the earlier case where no delays were associated with the transitions.

Figure 4.3 Unexplored simulation scenario in the absence of functional timings (state SC is never entered during simulation)

If there are no stimuli external to the states SS1 and SS2, SS always ends up in state SD while producing outputs o and c, as long as it takes less than 3 time units to execute the transition between S1 and S2. Clearly, if the transition between states S1 and S2 took a delay of more than 3 ns, the Statecharts would end up in state SC instead, with o and b as outputs. Lack of any functional timing would thus result in the Statecharts failing to predict that the system will ever occupy state SC. Notice that the delay between the generation of event a and, consequently, event o is an implementation-dependent parameter, and hence it is unlikely that one could have assigned these delay values in the Statecharts. Ignoring functional timing in this case caused the simulation to ignore a valid operational scenario that should have been considered for validation.
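The first example can be replayed with a few lines of Python. The sketch below is illustrative only; the event names come from Figure 4.2, but the one-pending-reaction semantics is an assumption of the sketch, not the Statecharts semantics used by the tools. It shows the same input trace producing different final states for a zero delay and for a 2-tick delay.

    # Illustrative replay of Figure 4.2: same inputs, different functional timing.
    def simulate(delay, inputs):
        """inputs: list of (time, event). In S1 only 'a' is handled, in S2 only 'b'.
        A reaction takes `delay` ticks; other inputs arriving meanwhile are ignored."""
        state, outputs, pending = "S1", [], None
        events = dict(inputs)
        for t in range(max(events) + delay + 2):
            e = events.get(t)
            if e == "a" and state == "S1" and pending is None:
                pending = (t + delay, "S2", "ea")
            elif e == "b" and state == "S2" and pending is None:
                pending = (t + delay, "S1", "eb")
            if pending and t >= pending[0]:
                state, out = pending[1], pending[2]
                outputs.append((t, out))
                pending = None
        return state, outputs

    print(simulate(0, [(0, "a"), (1, "b")]))   # ('S1', [(0, 'ea'), (1, 'eb')])
    print(simulate(2, [(0, "a"), (1, "b")]))   # ('S2', [(2, 'ea')])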


4.4 Incorporating functional timings into specification

In this section, we provide a simple example that further illustrates the effect of incorporating functional timings into a Statecharts description. Consider the Statecharts described in Figure 4.4. The system under design is a hypothetical insulin-administering system for diabetic patients. Whenever the patient's blood sugar becomes too low, it is the job of the system to administer controlled amounts of insulin into the patient's body until the blood sugar returns to normal.

Figure 4.4 Statecharts for a patient monitoring system

The Statecharts specification is now described in more detail. We assume that there is a separate device that monitors the patient's blood sugar at regular intervals and generates the event alarm whenever the patient's blood sugar is dangerously low. There are three orthogonal components of the specification, namely, DISPENSER_STATUS, PATIENT_STATUS, and SYSTEM_STATUS.

We first describe PATIENT_STATUS. By default, the patient is in the NORMAL state, representing the state in which the blood sugar of the patient is in an acceptable range. The mon_on event signifies that the patient's blood sugar has become abnormally low and that the patient should be kept under close observation. On receiving the event mon_on, the status of the patient is switched to critical, and the system enters the CRITICAL state. On the other hand, the event mon_off signifies that the patient's blood sugar has been normal for a while, and the system enters the NORMAL state.

PATIENT_STATUS thus represents the status of the patient as perceived by the system. By default, the patient is considered to be in the NORMAL state. While in the NORMAL state, on receiving the mon_on event, the patient's status is considered CRITICAL. While in the CRITICAL state, if an inject event is received, the patient switches to the INJECTING state while the injection is administered. The patient then reenters the CRITICAL state and waits for further inject events to occur. If a mon_off event is received while in the CRITICAL state, the patient is out of danger and returns to the NORMAL state.

DISPENSER_STATUS represents the state of the component that determines when and by what amount the insulin injection should be administered. By default, the dispenser starts in the IDLE state. On receiving an alarm event, it generates the mon_on event and enters the EVALUATE state, in which the dispenser determines the required dosage. The dispenser then generates an inject event and enters the READY state. On the occurrence of further alarm events while in the READY state, it repeats the cycle of entering the EVALUATE state, generating an inject event, and then reentering the READY state. If no alarm event occurs for 10 clock ticks, the dispenser reenters the IDLE state.

SYSTEM_STATUS monitors the overall state of the system. It specifies a timing constraint between the event alarm and the condition in(NORMAL). If the patient does not return to the NORMAL state within 5 clock ticks, the system is considered to have failed: a fail event is generated and the system enters the ERROR state.

Notice that no functional timings have been specified in the Statecharts. What has been specified is the sequence of states the system must occupy and the sequence of outputs the system must produce in response to external stimuli. Consider the case in which the designer proposes an implementation for the dispenser. In order to study the performance characteristics of the implementation, a performance model is developed. Seen as a black box, the performance model has one input port representing the alarm event and three output ports representing the events mon_on, mon_off, and inject.


There are two main concerns at this point. First, whether the implementation conforms to the specification, and second, what kind of simulation scenarios emerge when one incorporates the functional timings from the performance model of the implementation into the Statecharts.

In order to check for conformance of the implementation, one has to check whether the performance model produces outputs that correspond to the ones produced by the Statecharts component. For example, if on receiving an alarm event the Statecharts produced a mon_on event followed by an inject event, whereas the performance model produced only an inject event, the implementation is clearly faulty, and the error must be detected by the conformance-checking mechanism.

In order to study the effect of incorporating the functional timings obtained from the performance model into the Statecharts, consider the following scenario. Suppose the time taken to calculate the required dosage is 6 clock ticks, i.e., the dispenser occupies the EVALUATE state for 6 clock ticks on receiving the alarm event. This implies that the inject event is not generated for at least 6 clock ticks. Without the injection, the patient cannot return to the NORMAL state within the allowed 5 clock ticks, thus violating the explicit constraint and resulting in the system under design entering the ERROR state. Clearly, one needs either to restrict the amount of time spent in the EVALUATE state or to relax the maximum timing constraint between the alarm event and the in(NORMAL) condition. This is a case where the implementation did not violate the specification, but incorporation of the functional timing exposed a weakness in the specification.
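The arithmetic of this scenario can be checked mechanically. The short Python sketch below is illustrative only; the 5-tick constraint and the 6-tick EVALUATE delay are taken from the example above, while the helper function and its optimistic assumption that the patient is back to NORMAL immediately after the injection are inventions of the sketch.

    # Illustrative check for the dispenser scenario (numbers from the example above).
    def patient_returns_to_normal(evaluate_delay, constraint=5):
        """After an alarm, inject cannot appear before the dispenser leaves EVALUATE;
        assume (optimistically) NORMAL is reentered as soon as inject occurs."""
        earliest_inject = evaluate_delay
        earliest_normal = earliest_inject
        return earliest_normal <= constraint

    print(patient_returns_to_normal(evaluate_delay=6))   # False: SYSTEM_STATUS would enter ERROR
    print(patient_returns_to_normal(evaluate_delay=4))   # True under these optimistic assumptions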

4.5 Performance Annotation

In our approach, we simulate the performance and Statecharts models concurrently, with both interacting with the same environment. In order to check for conformance between the two models, we need to make sure that both models are subjected to identical input scenarios provided by the environment. The two models represent the same component of the system under design, but only one of them interacts with the rest of the system under design.

In order to maintain identical input scenarios for both models, it is not enough to make sure that the same input sequence is passed to both models. The models have different associated timings, and their outputs, even if analogous, can be produced at different times. This difference in timing implies that we have to choose which model will have its outputs interact with the environment. Since the performance model has more accurate information on functional timing, the timings of its outputs are more realistic than those of the Statecharts. Hence it is more realistic to allow the outputs of the performance model to interact with the environment. The outputs of the Statecharts model are then used to check whether the performance model produces the sequence of outputs that the Statecharts model predicted.

Since the environment can be sensitive to which output event is produced and when it is generated, an output can change the input scenario of the models. In that case, by the time the slower performance model catches up to the state analogous to the one in which the Statecharts model reacted to an input, the performance model can find that the input has changed from what the Statecharts experienced. Therefore, the faster model must wait until the slower model produces the analogous output in order to maintain the same input scenario for both models. In the present case, the Statecharts model ignores functional timings by substituting zero delays for the time to execute a transition, whereas the performance model takes nonzero time to perform the analogous action. As a result, the Statecharts model generates analogous events faster than its performance-model counterpart. Notice, however, that the input scenario can also change independently of the output produced by the performance model. In such a case, the two models must still conform. If the models produce output sequences that do not conform, an error must be flagged.

We now present a simulation-based technique, called performance annotation, that incorporates functional timings into a Statecharts model from a performance model of the same system under design. The technique involves synchronizing the execution of the two models. Incorporating functional timings from the performance model into the Statecharts allows the discovery of timing problems that are not apparent from a simulation of a Statecharts with less accurate functional timings. In addition, we also validate the performance model for correct behavior against its Statecharts counterpart.
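The synchronization idea can be sketched in a few lines of Python. This is a toy rendering only, not the ExpressVHDL/ADEPT mechanism; the class name, method names, and the single-port simplification are assumptions of the sketch. The specification model stops where it would emit an output, announces a request for acknowledgment, and releases the output only after the performance model reports the analogous output.

    class AnnotatedSpec:
        """Toy model of one performance-annotated specification output port."""
        def __init__(self, port):
            self.port = port
            self.waiting = False     # True between rfa and ack
            self.outputs = []        # outputs actually released to the environment

        def reach_output_point(self, t):
            # The unannotated Statecharts would emit here with zero delay;
            # instead we record a request for acknowledgment and wait.
            self.waiting = True
            return ("rfa", self.port, t)

        def ack_from_performance_model(self, t):
            if not self.waiting:
                return ("error", self.port, t)   # ack arrived before the rfa
            self.waiting = False
            self.outputs.append((self.port, t))  # the delayed Statecharts output
            return ("output", self.port, t)

    spec = AnnotatedSpec("NOTIFY")
    print(spec.reach_output_point(t=0))            # ('rfa', 'NOTIFY', 0)
    print(spec.ack_from_performance_model(t=7))    # ('output', 'NOTIFY', 7)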


4.6 Example of performance annotation

Before presenting the algorithm that transforms a given Statecharts into its annotated form, we describe the effects of performance annotation using a simple example of a generic monitor. The behavior of the generic monitor is described in the Statecharts of Figure 4.5. When turned on, the task of the monitor is to broadcast a notification every time a specific event occurs. When turned off, the monitor does not broadcast any notifications.

Figure 4.5 Statecharts for monitor before annotation

There are two mutually exclusive states in which the monitor can exist: MONITOR_OFF or MONITOR_ON. MONITOR_OFF signifies that the monitor is off. Conversely, MONITOR_ON signifies that the monitor is on and actively monitoring events of the type MONITORED_EVENT. By default, the monitor starts in the MONITOR_OFF state, indicating that it is off. When the monitor is in MONITOR_OFF state, generation of the event TURN_ON causes the monitor to become active by entering MONITOR_ON state. Likewise, when the monitor is in MONITOR_ON state, generation of the event TURN_OFF causes the monitor to become inactive by entering the MONITOR_OFF state. When the monitor is in MONITOR_ON state, it waits for MONITORED_EVENT to occur. Once the event MONITORED_EVENT occurs, the monitor exits the MONITOR_ON state, generates the broadcast event NOTIFY, and then reenters MONITOR_ON state. This process is repeated until the event TURN_OFF occurs, at which point the monitor leaves MONITOR_ON state and enters MONITOR_OFF state.

According to the Statecharts description, no real time elapses between the occurrence of the event MONITORED_EVENT and the generation of the broadcast event NOTIFY. In other words, there is a zero delay between the two events. In any realistic system, however, a nonzero delay will occur between MONITORED_EVENT and NOTIFY. This delay information is provided by the performance model, and it is incorporated into the Statecharts using the technique of performance annotation.

Figure 4.6 Statecharts for monitor after performance annotation

Figure 4.6 depicts the performance-annotated Statecharts. Performance annotation modifies the Statecharts so that there is a nonzero delay between the occurrence of the event MONITORED_EVENT and the generation of the event NOTIFY. This nonzero delay occurs because the annotated Statecharts delays the generation of NOTIFY until the corresponding performance model of the monitor generates an event analogous to the NOTIFY event. Once the performance model generates this analogous event, the Statecharts model generates the NOTIFY event. The modifications are made in such a way that any existing dependency relationships between the Statecharts and its environment are preserved.

There are three major modifications made to the existing Statecharts. The first modification is the addition of the orthogonal state MONITOR_ON_NOTIFY_HANDLER, which is composed of the substates MONITOR_ON_NOTIFY_SYNC_WAIT, MONITOR_ON_NOTIFY_SYNCED, and MONITOR_ON_NOTIFY_ERROR. The second modification is the decomposition of the basic state MONITOR_ON into two substates: MONITOR_ON_AUX and MONITOR_ON_WAIT. Finally, the third modification is the definition of two unique broadcast events: MONITOR_ON_NOTIFY_RFA and MONITOR_ON_NOTIFY_ACK.

MONITOR_ON_NOTIFY_RFA notifies the simulation environment that the Statecharts is waiting for the performance model to generate the event analogous to NOTIFY. Once this analogous event is generated by the performance model, the simulation environment generates MONITOR_ON_NOTIFY_ACK. The state MONITOR_ON_NOTIFY_HANDLER is used to synchronize these two events. If MONITOR_ON_NOTIFY_ACK occurs before MONITOR_ON_NOTIFY_RFA, the Statecharts indicates an error by entering MONITOR_ON_NOTIFY_ERROR.

The steps involved in executing the annotated Statecharts are as follows. Assume that the monitor has already entered the MONITOR_ON state and that it occupies the default substate MONITOR_ON_AUX. The orthogonal state MONITOR_ON_NOTIFY_HANDLER is in MONITOR_ON_NOTIFY_SYNCED, which is its default substate. When the event MONITORED_EVENT occurs, the event MONITOR_ON_NOTIFY_RFA is generated and the Statecharts enters the MONITOR_ON_WAIT state. On generation of the event MONITOR_ON_NOTIFY_RFA, the orthogonal state MONITOR_ON_NOTIFY_HANDLER enters the substate MONITOR_ON_NOTIFY_SYNC_WAIT. When the performance-model counterpart completes its execution and generates the event analogous to NOTIFY, the simulation environment generates MONITOR_ON_NOTIFY_ACK. As a result, MONITOR_ON_NOTIFY_SYNCED is entered, which in turn causes MONITOR_ON_WAIT to be exited and the event NOTIFY to be generated.

Finally, we conclude this section with the observation that the basic structure of the original Statecharts is preserved, in the sense that the dependencies of the states MONITOR_ON and MONITOR_OFF with respect to preexisting states and events are still maintained.

4.7 Rules for performance annotation

The purpose of performance annotation is to modify a Statecharts such that each output event produced is synchronized with the production of an analogous output by the performance model. Specifically, whenever we come across a transition in the Statecharts that produces an output event, we effectively delay the transition until the performance model generates the analogous output. In Figure 4.7(a), we show a typical Statecharts transition that produces an output event.

Figure 4.7 Transformations for performance annotation

For convenience, we call this transition τ. The source and destination of τ are σ1 and σ2, respectively. The trigger expression of τ is Γ, which can be any combination of events and conditions. The action expression of τ can be any combination of events and assignments to variables. We restrict outputs to consist only of events, and not variable assignments, since each new variable assignment can be communicated to the outside world via an event. Suppose α is the output event that we are concerned with; the rest of the action expression is represented by β.

We now enumerate all the transitions that can possibly be affected as a result of the execution of τ and the generation of α. To be affected by α, the trigger expression of a transition has to contain at least one of the following as a sub-expression:
• α: event, generated when τ is taken
• ex(σ1): event, generated since σ1 is exited
• en(σ2): event, generated since σ2 is entered
• in(σ1): condition, becomes false (was true)
• in(σ2): condition, becomes true (was false)

Given the above list, care has to be taken that the causality relationships between the transitions are not violated as a result of the modifications made to the Statecharts. The modifications made to the Statecharts are now explained. In Figure 4.7(b), we show the transformations made to the original specification. There are two major groups of modifications to the existing Statecharts specification. The first group involves introducing an extra wait state in which the Statecharts waits until the analogous output is produced in the performance model. The second group involves introducing an orthogonal state σ1_α_handler that explicitly handles the synchronization and detects whether an error occurred in synchronization. The orthogonal state σ1_α_handler is composed of three exclusive-or (XOR) states:
• σ1_α_synced: default state, occupied when both models are synchronized
• σ1_α_sync_wait: occupied when the Statecharts is waiting for an acknowledgment from the performance model
• σ1_α_error: occupied when an acknowledgment for α arrives from the performance model before the Statecharts generated its request for acknowledgment

The following events are relevant when in state σ1_α_handler:
• σ1_α_rfa: generated when the Statecharts is waiting for the analogue of α to be produced by the performance model
• σ1_α_ack: generated when the performance model produces the analogue of α; this is an event external to the Statecharts model, produced by the integrated-simulation environment

The remaining modifications are explained as follows. Instead of σ1, we now have a state Σ1, which is identical to σ1 except for transition τ. By identical, we mean that all transitions other than τ that have σ1 as a source or sink now have Σ1 as their source or sink, respectively. When Γ evaluates to true, the event σ1_α_rfa is generated and the σ1_α_wait state is entered. The event σ1_α_rfa broadcasts the fact that the original Statecharts would have produced α at this point, and that the modified Statecharts is waiting for the analogous output to occur in the performance model before it generates α. Once the performance model generates the analogue of α, en(σ1_α_synced) is eventually generated. This is followed by the execution of the actions of τ, including the generation of the event α.
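A minimal Python sketch of the transformation of Figure 4.7 is given below, operating on a transition represented as a dictionary. The representation, the helper name annotate_transition, and the naming scheme for the generated states and events are inventions of the sketch; the dissertation's actual transformation operates on the Statecharts model itself.

    def annotate_transition(tr, alpha):
        """Split a transition {'source','target','trigger','actions'} that emits
        output event `alpha` into the rfa/wait/ack pattern of Figure 4.7(b)."""
        src = tr["source"]
        rfa, wait, synced = f"{src}_{alpha}_rfa", f"{src}_{alpha}_wait", f"{src}_{alpha}_synced"
        other = [a for a in tr["actions"] if a != alpha]          # the beta part of the action expression
        return [
            {"source": src, "target": wait,
             "trigger": tr["trigger"], "actions": [rfa]},
            {"source": wait, "target": tr["target"],
             "trigger": f"en({synced})", "actions": [alpha] + other},
        ]

    original = {"source": "sigma1", "target": "sigma2",
                "trigger": "Gamma", "actions": ["alpha", "beta"]}
    for t in annotate_transition(original, "alpha"):
        print(t)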

4.8 Summary

Functional timing is an important part of a system specification. Specifically, it is crucial for obtaining accurate and realistic simulation scenarios. However, functional-timing information can typically be obtained only from the performance-modeling design stage, which occurs later than the specification-modeling stage. Performance annotation modifies the Statecharts specification in such a way that the execution of the Statecharts model and the performance model of the same system under design is synchronized. The modifications allow functional timings obtained from simulating the performance model to be incorporated into the Statecharts model. Performance annotation does not introduce any implementation-dependent changes into the Statecharts.

In the context of Statecharts, we would ideally incorporate functional timing for every transition; however, because the specification and its implementation can differ in internal structure, many transitions have no direct mapping in the implementation and cannot be assigned a delay. We therefore identified the minimum requirement for synchronization: only the outputs need to be synchronized, since only one of the two models interacts with the environment, and that interaction can determine the next events coming from the environment.

Chapter 5 Conformance

Abstract

We precisely define conformance between a Statecharts model and an ADEPT model for any given system under design. We then state certain design assumptions that are reasonable in the context of reactive-system design. The design assumptions arise from two observations: that the Statecharts model lacks performance-related timing information that is available to ADEPT, and that dependencies in behavior should be explicitly stated. Based on these design assumptions, we present an algorithm for detecting conformance between the models during simulation. Under the stated design assumptions, we prove the correctness and completeness of the algorithm. For each simulation step, the algorithm adds an overhead that is linear in the number of output events that the Statecharts model communicates to its environment.

5.1 Introduction One of the primary concerns of a system designer is to produce implementations that conform to the specification of the system under design. To validate whether a proposed implementation will indeed conform to its specification, one can check for conformance between the specification model and the performance model of the proposed implementation. If the models do not conform, clearly there is a discrepancy between the specification and the proposed implementation. Conversely, if the two models conform, there is an increased likelihood that the proposed implementation is indeed a correct interpretation of the specification. One can validate conformance between the two models by using either an analytical or a simulation-based approach. As analytical approaches for checking conformance are generally considered impractical for large and complex system models, this dissertation concentrates on a simulation-based approach. Specifically, this approach focuses on checking conformance between a Statecharts and an ADEPT model. To check for conformance, outputs generated by the two models are monitored and


compared during simulation to see if they agree with each other. If the two models conform, the ADEPT model will produce outputs that correspond to the outputs predicted by the Statecharts model. This approach is similar to comparison checking [BKA90] used in the context of software-design diversity [RMB81], and is also called back-to-back testing [VHT86] and automatic testing [SE86]. However, checking whether the models correspond with respect to their outputs is difficult, because the corresponding outputs can be produced at different times, and possibly in a different order, by the two models. The differences in output-production times and output order can be ascribed to the disparity in the performance-related information available to the Statecharts and ADEPT models. The Statecharts specification lacks timing information for several reasons. As described in the previous chapter, the Statecharts specification is based on the perfect-synchrony hypothesis [BER91], which implies that the Statecharts model reacts instantaneously to environmental stimuli. Further, the specifier ignores functional timings because of the unavailability of performance-related data, the preservation of implementation independence, and the level of abstraction. The performance model, in contrast, has timing information. Consequently, the Statecharts model generally produces outputs instantaneously, whereas the concurrently-executing performance model produces analogous outputs later, after a nonzero delay.
A conformance-checking mechanism must constantly monitor all the outputs produced by both models to make sure analogous events are produced by the two models in the correct order. Order correctness implies that the following two conditions must be satisfied:
1. The ADEPT model produces an output after the Statecharts model generates the analogous output, and
2. The dependencies between the Statecharts outputs are preserved in the set of analogous outputs produced by the ADEPT model.
To determine whether the first condition of order correctness is satisfied, the conformance-checking mechanism must ensure that for every output generated by the ADEPT model, there is a corresponding Statecharts output that has occurred earlier. If an ADEPT output is produced before the corresponding Statecharts output is generated, an error is indicated. To satisfy the second condition of order correctness, the conformance-checking


mechanism must ensure that if two Statecharts outputs depend on one another, their analogous ADEPT outputs are produced in the same order. If this order is violated, an error should be indicated. For example, suppose the Statecharts model of an airplane pilot seat produced the event eject-seat followed by the event open-parachute as outputs. If the ADEPT model reversed the order of the events produced, a potential disaster would be predicted. However, if two Statecharts outputs do not depend on each other, then the performance model should be able to produce these events in an order different from what the Statecharts predicted. This difference across models in the order of produced outputs is a case of nondeterminism. Such nondeterminism is hard to eliminate during the specification stage without making the specification implementation-dependent.

As an example of nondeterminism, consider a hypothetical patient-monitoring device that measures the patient's heart rate, body temperature, and blood pressure every half-hour. Assume that the three measurements are of equal importance, i.e., it does not matter in which order they are taken, as long as they are all taken within a given time period. In the absence of functional timing, the Statecharts will execute all these tasks with zero delay, in an order that depends on how many simulation cycles each task takes to complete. However, the performance model may execute these tasks in a different order, depending on how long it takes to execute each task. Since the relative ordering of the completions of these tasks is unimportant, both the Statecharts and ADEPT models generate valid orderings of task completion, as long as they are all completed within the given time period.

In the example above, the observed divergence in the behavior of the two models represents the case where, although there is no error in either individual model, following the Statecharts would predict a sequence of outputs different from what the ADEPT model would suggest. Clearly, the path followed by the ADEPT model is more realistic, considering that it has performance-timing information not specified in the Statecharts model. Since no dependencies were violated, the conformance-checking algorithm should not indicate errors in such cases.
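The two-condition idea of order correctness can be phrased as a small check over recorded output sequences. The Python function below is illustrative only: the dependency relation is supplied explicitly as pairs, and the function name and representation are assumptions of the sketch rather than part of the conformance-checking mechanism developed later in this chapter.

    def order_correct(spec_outputs, adept_outputs, depends):
        """spec_outputs / adept_outputs: lists of event names in production order.
        depends: set of (first, second) pairs whose relative order must be kept.
        Returns True iff every dependent pair appears in the same order in both."""
        def positions(seq):
            return {e: i for i, e in enumerate(seq)}
        s, a = positions(spec_outputs), positions(adept_outputs)
        for first, second in depends:
            if first in s and second in s and first in a and second in a:
                if (s[first] < s[second]) != (a[first] < a[second]):
                    return False
        return True

    print(order_correct(["eject-seat", "open-parachute"],
                        ["open-parachute", "eject-seat"],
                        {("eject-seat", "open-parachute")}))   # False: dependent order reversed
    print(order_correct(["heart-rate", "temperature"],
                        ["temperature", "heart-rate"], set())) # True: no dependency, reordering allowed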

Conformance checking

We now briefly describe our basic approach to conformance checking. As discussed


above, there is an implied nondeterminism in the Statecharts specification, since it lacks functional timing. To make sure that the Statecharts model follows the same choices as its corresponding ADEPT model, we use the performance-annotated Statecharts model when comparing the outputs from both models.

Let SC represent the performance-annotated version of a Statecharts model of the given system under design. We can view the Statecharts model as a black box with input and output ports for communication with the outside world. For example, Figure 5.1(a) shows the performance-annotated Statecharts representation of the generic monitor introduced in Chapter 4. In Figure 5.1(b), SCMONITOR represents the corresponding black box with TURN_ON, TURN_OFF, and MONITORED_EVENT as input ports and NOTIFY as the output port. The ports marked with dashed lines, NOTIFY_ACK and NOTIFY_RFA, represent Statecharts events that are used to synchronize the Statecharts with the performance model to incorporate functional timings. NOTIFY_ERR indicates that the condition of order correctness has been violated between NOTIFY_RFA and NOTIFY_ACK.

Figure 5.1 Statecharts for monitor after performance annotation (a) and its black-box representation (b)

Given this black-box view of the Statecharts model, the basic idea behind implementing performance annotation is to delay the generation of an output event on a port of SC until the analogous output occurs in the corresponding ADEPT model. In the remainder of the chapter, we let AC represent the ADEPT component of a proposed implementation of SC. An output on a port of SC occurs as the result of a transition from one state to another in SC. We call the former state a source state for that output. In the given example, the source state for the event NOTIFY is MONITOR_ON_AUX. When SC is in a source state, we call the source state active; otherwise we call it inactive. During simulation, on entering a source state, SC generates a request-for-acknowledgment (rfa) event for the corresponding output. SC remains in the source state until the corresponding acknowledgment (ack) event is generated. The ack event is generated by the integrated simulation environment when the analogous output is generated by the ADEPT model. Once this ack event occurs, SC leaves the source state and generates the corresponding Statecharts output event as a result of the state transition. Further, if an ack event arrives before the corresponding rfa event is generated, an error (err) event is generated. For example, the source state MONITOR_ON_AUX is inactive by default. When the event TURN_ON occurs, followed by the event MONITORED_EVENT, the source state becomes active and remains so until the Statecharts reenters the state MONITOR_ON_NOTIFY_SYNCED. The corresponding rfa, ack, and err events are named NOTIFY_RFA, NOTIFY_ACK, and NOTIFY_ERR, respectively.

Given the concurrent nature of a reactive system, it is quite possible that there are more

than one source state for a given output event. In the generic monitor example, one can imagine that there are other monitors that execute concurrently and generate the NOTIFY event whenever the events they monitor occur. In such cases, there will be multiple active source states waiting for the same event NOTIFY_ACK to occur. However, when the performance model generates the analogous output, it corresponds to exactly one of these monitors. Having multiple active sources therefore results in ambiguity when determining which of the active sources corresponds to an ack event. To remove this ambiguity, we develop the algorithm EliminateOrthogonalSources(). This algorithm is presented in Section 5.7 and describes how to translate any SC into a semantically equivalent Statecharts model that has at most one active source for a given output. The basic idea of the algorithm is to rename the Statecharts output event at each source and update the rest of SC with the renamed event. In the remainder of this chapter, unless specified otherwise, it is assumed that for every output, SC has at most one source state that can be active at any time.

To check for conformance between SC and AC, all possible sequences of rfa and ack events that appear across the ports of SC must be considered. In other words, our problem is to develop an algorithm that efficiently checks all possible sequences of rfa events generated in SC against the corresponding sequences of ack events generated due to AC. The algorithm must correctly identify cases when these sequences conform and raise an error condition when they mismatch. Checking whether the rfa and ack event sequences conform is difficult. As discussed earlier, the rfa and ack events can be produced at different times, and in a different order. In order to develop an efficient checking mechanism, we first derive some properties of both rfa and ack sequences, based on design assumptions that we consider reasonable. These properties, combined with those of the EliminateOrthogonalSources() algorithm, are used to develop an algorithm that has a linear overhead in the number of outputs at each step of simulation. We prove the algorithm's correctness and show its completeness under the given design assumptions. Completeness of this algorithm implies that if the design conditions are satisfied, all violations of conformance that can occur during the simulation are detected.

The chapter is organized as follows. In Section 5.2 we develop a framework and precisely define the problem of detecting conformance within that framework. In Section 5.3 we state our design assumptions. Based on these design assumptions, in Section 5.4 we derive certain properties of the generated sequences of rfa and ack events. In Section 5.5 we propose the conformance-checking algorithm. In Section 5.6 we prove that, given our definition of conformance and under the stated design assumptions, this algorithm correctly checks for conformance in all cases during simulation. Finally, in Section 5.7, we first present the algorithm EliminateOrthogonalSources() and then prove that the conformance-checking algorithm has a linear-time overhead in the number of outputs of the original Statecharts model.

5.2 Definitions

As noted in Section 1.4, we use the VHDL simulation environment as a common underlying platform for executing both SC and AC. The VHDL simulation environment is based on a stimulus-response paradigm: when there is a stimulus to the model, the model responds and then waits for further stimuli. During model execution, each execution step consists of responding to the stimuli generated in the previous execution step and generating further stimuli. A simulation session is a sequence of such model-execution steps.

In order to model the behavior of the system under design over time, the simulation environment maintains a simulation clock. A step is said to take delta time if the simulation clock is not advanced as a result of executing that step; a step taking delta time is called a delta step. The simulator uses the abstraction of simulation time, defined as a tuple t = (t_r, t_d). Here t_r represents the real time, which is the amount by which the simulation clock has advanced since the beginning of the current simulation session, and t_d represents the delta time, which is the number of delta steps executed so far. Given two simulation times t = (t_r, t_d) and s = (s_r, s_d), we define the operations =, < and ≤ as follows:
t = s iff (t_r = s_r) and (t_d = s_d)
t < s iff (t_r < s_r) or (t_r = s_r and t_d < s_d)
t ≤ s iff (t < s) or (t = s)

We define a simulation event, or simply an event, as an external or internal stimulus that occurs instantaneously during a simulation. Given two events e1 and e2 that occur at simulation times t1 and t2 respectively, we say e1 precedes e2, or e1 occurs before e2, if t1 < t2.

A rfa event r in SC is defined by a tuple (portid, timeStamp). portid identifies the specific output port in SC where the corresponding Statecharts output event will appear as a result of the corresponding transition. timeStamp is the simulation time at which the rfa event r occurred. The following functions are defined on any rfa event r:
• portID(r): returns the value of the portid field.
• time(r): returns the simulation time at which r occurred.
• realTime(r): returns the real-time component of r's timeStamp field.
• deltaTime(r): returns the delta-time component of r's timeStamp field.
• source(r): returns r's corresponding source state.
• ack(r): denotes r's corresponding ack event. If r is never acknowledged, ack(r) is undefined.
• defined(r): returns true if r is defined; otherwise, it returns false.
• statechartEvent(r): returns the name of the Statecharts event that corresponds to r.
The following function is defined on any two rfa events r1 and r2:
• havePrecedence(r1, r2): true iff r1 must precede r2, that is, the semantics of the specification dictates that event r1 must precede event r2.

Similar to a rfa event, an ack event a is also defined by a tuple (portid, timeStamp). portid identifies the corresponding output port in SC where the ack event a eventually causes an output to occur. timeStamp is the simulation time at which the ack event a occurred. The following functions are defined on any ack event a:
• portID(a): returns the value of the portid field.
• time(a): returns the simulation time at which a occurred.
• realTime(a): returns the real-time component of a's timeStamp field.
• deltaTime(a): returns the delta-time component of a's timeStamp field.
• rfa(a): returns a's corresponding rfa event. If a has no corresponding rfa event, rfa(a) is undefined.
• defined(a): returns true if a is defined; otherwise, it returns false.
Observe that r = rfa(ack(r)) if ack(r) is defined, and a = ack(rfa(a)) if rfa(a) is defined.

The following function is defined on any two ack events a1 and a2, where r1 = rfa(a1) and r2 = rfa(a2):
• conformsPrecedence(a1, a2): returns true iff the following two conditions hold.
Conformance of Precedence Condition 1 (CPC1): both r1 and r2 are defined, i.e., defined(r1) and defined(r2).
Conformance of Precedence Condition 2 (CPC2): if it matters whether r1 or r2 is produced first, AC must produce ack(r1) and ack(r2) in the same order as r1 and r2 are produced, i.e., ¬havePrecedence(r1, r2), or havePrecedence(r1, r2) and a1 precedes a2.

An rfa set for SC is a set of rfa events generated by SC. An ack set for AC is a set of ack events generated by AC. An ack set A conforms to an rfa set R iff the following three conditions hold:
Conformance Condition 1 (CC1): for every member of the rfa set R, the corresponding ack event must be a member of the ack set A, i.e., ∀r ∈ R, ack(r) ∈ A.
Conformance Condition 2 (CC2): for every member of A, the corresponding rfa event must be a member of the rfa set R, i.e., ∀a ∈ A, rfa(a) ∈ R.
Conformance Condition 3 (CC3): the precedence relationships between rfa events in R are maintained by their corresponding ack events in A, i.e., ∀a, b ∈ A, conformsPrecedence(a, b).

For any simulation instant t, let us define the following:
R(t) = { r | r is a rfa event and time(r) ≤ t }
A(t) = { a | a is an ack event and time(a) ≤ t }
R(t) and A(t) are, respectively, the sets of all rfa and ack events that have occurred up to simulation time t.
ConformsUntil(SC, AC, t): A(t) conforms to R(t). In other words, given an SC and an AC, ConformsUntil(SC, AC, t) is true iff the ack set A(t) conforms to the rfa set R(t).

The problem can now be formally stated as follows.

Develop a conformance checking algorithm that determines, at any simulation time t, if ConformsUntil(SC, AC, t) is true or false for any Statecharts component SC and ADEPT component AC.
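Before turning to the design assumptions, the definitions above can be fixed in a compact Python sketch. It is illustrative only: the dataclass names, the dictionary ack_of standing in for ack(r), and the have_precedence callback standing in for havePrecedence(r1, r2) are all assumptions of the sketch, and the conforms() function is a brute-force restatement of CC1-CC3 rather than the efficient algorithm developed later in the chapter.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SimTime:
        r: int   # real-time component t_r
        d: int   # delta-step component t_d
        def __lt__(self, other):
            return (self.r, self.d) < (other.r, other.d)
        def __le__(self, other):
            return (self.r, self.d) <= (other.r, other.d)

    @dataclass(frozen=True)
    class Rfa:
        port: str
        time: SimTime

    @dataclass(frozen=True)
    class Ack:
        port: str
        time: SimTime

    def conforms(R, A, ack_of, have_precedence):
        """R: set of Rfa, A: set of Ack, ack_of: dict Rfa -> Ack (or None),
        have_precedence: function (r1, r2) -> bool. Checks CC1-CC3 directly."""
        rfa_of = {a: r for r, a in ack_of.items() if a is not None}
        cc1 = all(ack_of.get(r) in A for r in R)          # every rfa acknowledged in A
        cc2 = all(rfa_of.get(a) in R for a in A)          # every ack has its rfa in R
        cc3 = all((not have_precedence(rfa_of[a1], rfa_of[a2])) or a1.time < a2.time
                  for a1 in A for a2 in A
                  if a1 is not a2 and a1 in rfa_of and a2 in rfa_of)
        return cc1 and cc2 and cc3

    t0, t1, t2 = SimTime(0, 0), SimTime(0, 1), SimTime(1, 0)
    r1, r2 = Rfa("eject-seat", t0), Rfa("open-parachute", t0)
    a1, a2 = Ack("eject-seat", t1), Ack("open-parachute", t2)
    print(conforms({r1, r2}, {a1, a2}, {r1: a1, r2: a2},
                   lambda x, y: (x, y) == (r1, r2)))   # True: the required order is preserved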

5.3 Design assumptions

In order to precisely characterize the nature of the generated sets of rfa and ack events, we make the following two design assumptions.

Design Assumption 1 (DA1): Given any rfa event r, time(r) ≤ time(ack(r)).

As stated earlier, SC ignores delay-related information. This assumption states that since AC has further information regarding performance delays, if AC conforms to SC, an ack event produced by AC can occur no earlier than the corresponding rfa event produced by SC. If ack(r) occurs earlier than r, we assume a violation of conformance between AC and SC.

Figure 5.2 Implementation of delayedRfa(r,t)

Before we make our next design assumption, we introduce a few definitions. In Figure 5.2, we describe a transformation delayedRfa(r, t) that can be applied to a Statecharts. On receiving the stimulus Γ, the Statecharts on the left generates the rfa event r and enters the state named S, which is source(r). In the transformation, we introduce an extra state, called D. On receiving the stimulus Γ, the Statecharts enters D and waits in that state until simulation time t has passed since it entered D. It then generates the event r and enters the source(r) state. We define delayedRfa(r, t) as the event r that is generated from the transformed Statecharts.

Given two rfa events r1 and r2, we say r1 and r2 have explicit precedence if and only if r1 occurs before r2 and there is no transformation delayedRfa(r1, t), regardless of t, that will change the order of occurrence of these two events. We define
explicitPrecedence(r1, r2) = (r1 precedes r2) and ¬∃ t > time(r1) such that (r2 precedes delayedRfa(r1, t))
Explicit precedence is specified by ensuring that there exists a dependency in SC such that source(r2) is not entered until source(r1) has been left. We now state our second design assumption.

Design Assumption 2 (DA2): Any intended precedence relationship between two output events in SC must be explicitly specified, i.e., given rfa events r1 and r2, havePrecedence(r1, r2) iff explicitPrecedence(r1, r2).

This design assumption states that any precedence relationship between the generation of two events must be specified explicitly rather than implied by the relative simulation times taken to generate the two events.

5.4 Properties of rfa and ack sequences

We now define the following two sets:
RFA(t) = { r | time(ack(r)) > t and time(r) ≤ t }
ACK(t) = { a | time(rfa(a)) > t and time(a) ≤ t }
RFA(t) is the set of rfa events generated by time t and not yet acknowledged. ACK(t) is the set of ack events generated by time t whose corresponding rfa events have not yet occurred.

Lemma A: At any simulation time t, ACK(t) = ∅.
This lemma follows from DA1. If a is an element of ACK(t), then t < time(rfa(a)). However, DA1 states that time(rfa(a)) ≤ time(a). Since time(a) ≤ t, we have t < time(rfa(a)) ≤ time(a) ≤ t, which is clearly a contradiction.

Lemma B: At any simulation time t, the size of RFA(t) is at most the number of outputs of SC.
Each source state is associated with a unique output port of SC. As mentioned in the introduction, each output port of SC is associated with exactly one source state that can be active at any time. At any time t, SC has as many unacknowledged rfa events as there are active source states. Thus, the maximum number of active source states is equal to the number of output ports in an SC with no orthogonal source states, and Lemma B follows.

Lemma C: No two elements of RFA(t) have any precedence relationships to be maintained, i.e., r1, r2 ∈ RFA(t) only if ¬havePrecedence(r1, r2).
Suppose r1 and r2 are in RFA(t) and r1 precedes r2. Assume r1 occurred at t1 ≤ t and r2 at t2 ≤ t. Clearly t1 < t2. Change SC to replace r1 with delayedRfa(r1, τ), for any τ > t2, and consider the effect of this replacement on the production of r2. If r2 is synchronized explicitly with r1, then r2 will not be produced until source(r1) is exited, and source(r1) is exited if and only if ack(r1) is produced, meaning r1 is acknowledged. Since RFA(t) can contain only those rfa events that have not been acknowledged, we have a contradiction of the assumption that r1 is a member of RFA(t). This implies that explicitPrecedence(r1, r2) is false, and therefore havePrecedence(r1, r2) is false as well.

Lemma D: If r1 and r2 have precedence constraints and r1 precedes r2, then r2 cannot occur until r1 is acknowledged, i.e., havePrecedence(r1, r2) only if time(ack(r1)) < time(r2).
From Lemma C, r1 and r2 cannot be in the same RFA(t) for any t. Let time(r1) = t1 and time(r2) = t2, such that r1 first appears in RFA(t1) and r2 first appears in RFA(t2). Clearly, t1 < t2.

Lemma E At any simulation time t, RFA = RFA(t).
Proof: This lemma is true due to the execution semantics of VHDL. At simulation time t, all the signals have been updated and all processes have been executed. Further, the processes RFAchecker and ACKchecker are executed sequentially. As a result, the variable RFA is modified in an atomic manner: any rfa event r generated in the first stage of the simulation cycle is first inserted into RFA, and if the corresponding ack event ack(r) is generated during the first stage of the simulation cycle at time t, r is removed from RFA when the ACKchecker process is executed. Thus the variable RFA maintains the set of all rfa events that do not have a corresponding ack event produced by the simulation time t.

Lemma F At any simulation time t, if conformanceError is false, A(t) conforms to R(t) - RFA.
Proof: Let A = A(t) and R = R(t) - RFA. From Lemma E, we have that at any simulation time t, the variable RFA in the algorithm DetectConformance represents the set RFA(t), i.e., the set of rfa events not yet acknowledged. R represents those rfa events that have been acknowledged by time t. A represents those ack events that have a corresponding rfa event by simulation time t. Let us assume conformanceError is false. We show A conforms to R by showing that all three conformance conditions CC1, CC2 and CC3 are satisfied. We first show CC1 and CC2 are satisfied, i.e., that every element of R is acknowledged by an element of A and every element of A has a corresponding rfa event in R. This is trivially true, since all ack events are elements of A, and R is the set of all rfa events that have been acknowledged. To show CC3, we proceed as follows. Suppose there exist two rfa events r1 and r2 in R that occur respectively at simulation times t1 and t2, such that havePrecedence(r1, r2) is true. Clearly, t1 < t2, RFA(t1) contains r1, and RFA(t2) contains r2. By Lemmas C and D, RFA(t2) cannot contain r1, implying ack(r1) must have already occurred before r2. Since r2 is acknowledged by ack(r2) subsequently, ack(r1) precedes ack(r2). Hence CC3 is satisfied.

Theorem 1 ConformsUntil(SC, AC, t) iff ConformanceChecks(SC, AC) is true at time t.

Proof (if): ConformsUntil(SC, AC, t) if ConformanceChecks(SC, AC) is true at time t.
ConformanceChecks(SC, AC) is true only if RFA is ∅ and conformanceError is false. By Lemma F, it follows that A(t) conforms to R(t). From our definition of ConformsUntil(SC, AC, t), the proof follows.

Proof (only if): ConformsUntil(SC, AC, t) only if ConformanceChecks(SC, AC) is true at time t.
We claim that if ConformsUntil(SC, AC, t) is true, ConformanceChecks(SC, AC) is also true at time t. To prove our claim, we show that if the events in R(t) and A(t) are made available to the algorithm at the same simulation time as they are generated, ConformanceChecks(SC, AC) will be true at time t if R(t) and A(t) conform. Assume ConformsUntil(SC, AC, t) is true. This implies that R(t) and A(t) conform, by our definition of conformance. Recall, from Section 5.5, that by the expression "time t" we mean "the end of the simulation cycle at time t". For ConformanceChecks(SC, AC) to return true, we prove that the following two conditions are satisfied:
Condition 1. RFA = ∅ at time t
Condition 2. conformanceError = false at time t
Condition 1 Proof: Since A(t) conforms to R(t), at time t all elements of R(t) have been acknowledged. As RFA represents the set of rfa events in R(t) not yet acknowledged, RFA will be empty at time t.
Condition 2 Proof: We show that if conformanceError = true, ConformsUntil(SC, AC, t) cannot be true. Suppose conformanceError is true. This implies that there is an ack event a such that r = rfa(a) is not an element of RFA at time t. This implies that r could not have been an element of RFA at any time τ ≤ t, since r can be removed from RFA only if a was generated. Thus r must be generated after time t. Therefore we have
time(a) ≤ t < time(rfa(a)),
which contradicts DA1. Thus conformanceError = false at time t. Since both Conditions 1 and 2 are satisfied, ConformanceChecks(SC, AC) is true at time t.
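To make the bookkeeping reasoned about in Lemmas E and F and in Theorem 1 concrete, the following is a small, hypothetical Python sketch of the per-cycle rfa/ack tracking; the class and method names are our own illustrative assumptions, not the actual VHDL RFAchecker and ACKchecker processes.

    class ConformanceChecker:
        """Illustrative sketch of the rfa/ack bookkeeping used in the proofs above."""

        def __init__(self):
            self.RFA = set()                # rfa events generated but not yet acknowledged
            self.conformance_error = False  # mirrors the conformanceError flag

        def rfa_checker(self, rfa_events):
            # First stage of a simulation cycle: record every rfa event produced by SC.
            self.RFA |= set(rfa_events)

        def ack_checker(self, ack_events, rfa_of):
            # Second stage: each ack event must match a pending rfa event.
            for a in ack_events:
                r = rfa_of(a)               # corresponding rfa event of ack a
                if r in self.RFA:
                    self.RFA.discard(r)     # r is now acknowledged
                else:
                    self.conformance_error = True   # ack arrived before its rfa

        def conformance_checks(self):
            # Theorem 1: conformance up to the current time holds iff nothing is
            # pending and no out-of-order ack was ever observed.
            return not self.RFA and not self.conformance_error

At the end of every simulation cycle, rfa_checker() and ack_checker() would be run in sequence, mirroring the sequential execution of the RFAchecker and ACKchecker processes assumed in Lemma E.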

5.7 Orthogonal Sources

In this section, we present an algorithm that renames the output events of a Statecharts model in such a way that, for any output event, there are no orthogonal source states that can be concurrently active. Recall that two states are orthogonal if the Statecharts model can be in both states at the same time. By eliminating orthogonal sources for a given output event, we ensure that there is exactly one possible state that could have been the source of the transition that produced the output event.

5.8 Algorithm EliminateOrthogonalSources

// S is the topmost state of the Statecharts model.
// E is a list of names of its output events.
procedure EliminateOrthogonalSources(S, E)
    // Assign labels to each state in the Statecharts whose topmost state is S.
    // The labels range from smallestLabel to largestLabel-1.
    smallestLabel = 0
    largestLabel = AssignLabels(S, smallestLabel)
    for each output event e in E
        // Each output event e occurring in the action expression of a transition label
        // is renamed by appending the label of the source state of that transition to e.
        // outputList is the set of all new event names that replace e.
        outputList = RenameActions(S, e)
        // All occurrences of e in trigger expressions are replaced by the event
        // formed by "or"ing all the renamed events of e.
        RenameTriggers(S, e, outputList)
        // e is replaced in the set of output events E by the renamed events.
        E = E - {e} ∪ outputList

// Generates integer labels for sub-states of S such that no two orthogonal sub-states of S
// have the same label. Returns the highest label value assigned to sub-states of S.
function AssignLabels(S, minLabel): integer
    maxLabel = minLabel
    if S is a leaf state
        Assign S the label minLabel
        maxLabel = minLabel + 1
        return maxLabel
    else
        for each sub-state Si of S
            maxLabel = AssignLabels(Si, maxLabel)
        if S has only mutually-exclusive sub-states
            Relabel all sub-states of S with the label of the leftmost sub-state of S
        Mark S with the label of the leftmost sub-state of S
        return maxLabel

// Substitute every occurrence of E in expr with E*.
procedure Substitute(expr, E, E*)

// Return the label of state S.
function Label(S): integer

// Given Statecharts output event e, renames occurrences of e in action expressions.
// Returns the list of all events that replace occurrences of event e.
function RenameActions(S, e): set of strings
    let renamedList = Ø
    if S is a source state of e
        for each transition t that generates e
            let actionExpr = action expression of transition t
            let e* = e + ConvertToString(Label(S))
            renamedList = renamedList ∪ {e*}
            Substitute(actionExpr, e, e*)
    for each sub-state Si of S
        renamedList = renamedList ∪ RenameActions(Si, e)
    return renamedList

// Returns a string that has all the elements of eventNameList "or"ed together.
function OrCombination(eventNameList): string
    let orExpr = ""    // Empty string
    for each element eventName of eventNameList
        if orExpr = ""
            orExpr = eventName
        else
            orExpr = orExpr + " or " + eventName    // "+" means concatenate
    return orExpr

// Replaces occurrences of e in trigger expressions with the "or"ed
// combination of the events in newEvents.
procedure RenameTriggers(S, e, newEvents)
    for each transition t from S that contains the event e in its trigger
        let triggerExpr = trigger expression of transition t
        Substitute(triggerExpr, e, OrCombination(newEvents))
    for each sub-state Si of S
        RenameTriggers(Si, e, newEvents)

Given a Statecharts S and a list of its output events, EliminateOrthogonalSources() first calls AssignLabels(). Function AssignLabels() assigns labels to each sub-state of S such that no two orthogonal states have the same label. Next, for each output event e in the list E, all the transition labels that contain that output event are modified by EliminateOrthogonalSources() as follows. If the action expression of a transition label contains e, every occurrence of e is replaced by a new event formed by appending the label of the transition's source state to e. All such renamed events are then 'or'ed together to replace occurrences of e in the trigger expressions of all transitions. Finally, EliminateOrthogonalSources() modifies the output set by replacing e with all the renamed events. The semantics of the original Statecharts are preserved, since we neither introduce nor delete any transitions, and we also preserve the dependency relationships between transitions of the original Statecharts.

AssignLabels() labels each state in the Statecharts model as follows. For every leaf state, we assign a unique integer label. A non-leaf state is marked with the label of its leftmost sub-state. If a state is composed of only mutually-exclusive states, then these sub-states can share the same label, as they are not orthogonal; each of the mutually-exclusive sub-states is marked with the label of the leftmost sub-state. A small illustrative sketch of this labeling scheme is given below, after which we prove certain properties of the Statecharts model generated by EliminateOrthogonalSources().
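As an illustration of the labeling scheme just described, here is a small, hypothetical Python sketch; the State class and the is_and flag (marking states whose sub-states are orthogonal) are our own assumptions and not the representation used by the actual tools.

    class State:
        def __init__(self, name, substates=None, is_and=False):
            self.name = name
            self.substates = substates or []   # empty list means a leaf (basic) state
            self.is_and = is_and               # True means the sub-states are orthogonal
            self.label = None

    def assign_labels(state, min_label=0):
        """Label states so that no two orthogonal states share a label.
        Returns the smallest label value not used in this subtree."""
        if not state.substates:                # leaf state: assign a unique label
            state.label = min_label
            return min_label + 1
        max_label = min_label
        for sub in state.substates:
            max_label = assign_labels(sub, max_label)
        if not state.is_and:                   # only mutually-exclusive sub-states:
            shared = state.substates[0].label  # they may safely share one label
            for sub in state.substates:
                sub.label = shared
        state.label = state.substates[0].label # mark S with its leftmost sub-state's label
        return max_label

    # Example: two orthogonal regions A and B, each with two mutually-exclusive leaf states.
    root = State("S", [State("A", [State("A1"), State("A2")]),
                       State("B", [State("B1"), State("B2")])], is_and=True)
    assign_labels(root)   # A's leaves share one label, B's leaves another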

Property 1. No two orthogonal states have the same label.
Proof: Suppose we have two states s1 and s2 that are orthogonal. Suppose s is their nearest common ancestor. Clearly, s cannot be either s1 or s2, since that would make one of the states an ancestor of the other. Therefore, s1 and s2 belong to two distinct sub-trees; suppose these two sub-trees are respectively rooted at states r1 and r2. Clearly, r1 and r2 are immediate sub-states of s.


Without loss of generality, suppose r1 is considered by AssignLabels() before r2. Clearly, r1 and r2 cannot be mutually exclusive, since then s1 and s2 would be mutually exclusive. But if r1 and r2 are orthogonal, no descendant of r1 (including r1) can have a label in common with a descendant of r2 (including r2), since all the sub-states of r2 have labels greater than the highest label assigned to r1. Thus s1 and s2 have different labels.

Property 2. For any output event in the transformed Statecharts, there are no orthogonal source states.
Proof: From Property 1, we know that every orthogonal state is labeled distinctly. Since every output event is labeled according to its source state, if the source states are orthogonal, the output events will differ in their labels.

Property 3. The transformed Statecharts is semantically equivalent to the original Statecharts.
Proof: This property is trivially true. EliminateOrthogonalSources() replaces an output event e in the original Statecharts by a set of events. Suppose there are n orthogonal sources for e. Without loss of generality, let us assume the set of symbols {e1,...,en} replaces e. Let us call the original Statecharts S and the modified Statecharts S´. The following conditions are true by construction:
1. e occurs in S iff ei occurs in S´, 1 ≤ i ≤ n.
2. A transition with e in its trigger expression has its trigger true iff the corresponding transition in S´ has its trigger expression true.
From the two conditions, it is obvious that every transition in S has a corresponding transition in S´, where the former transition is enabled if and only if the latter transition is enabled. Since we do not introduce any new states, every state in S has a corresponding state in S´. It follows that S and S´ are equivalent.

Property 4. The maximum number of times that an output event is renamed is bounded by the number of basic states in the original Statecharts.
Proof:


A new label is introduced only when a basic state is encountered by the function AssignLabels(). Since an output event can be renamed at most as many times as there are distinct labels, Property 4 is proved.

Lemma G The size of the acknowledgment sequence RFA(t) is linear in the number of outputs of the original (before applying EliminateOrthogonalSources()) Statecharts model.
Proof: Let SC° represent the Statecharts model before applying EliminateOrthogonalSources(). Let SC be the result of applying EliminateOrthogonalSources() on SC°. Let Output(C) denote the set of output ports of a black-box C. From Lemma B, we have |RFA(t)| ≤ |Output(SC)|. In the algorithm EliminateOrthogonalSources(), each output outp of SC° gets renamed at most M_outp times, where M_outp is the number of orthogonal sources of outp. Therefore,

|RFA(t)| ≤ |Output(SC)| = Σ_{outp ∈ Output(SC°)} M_outp    (1)

However, in a Statecharts model, the maximum number of orthogonal states that the Statecharts can be in is bounded by the number of leaves in the hierarchy tree of the model. Let L represent the number of leaves. The number of states a Statecharts model can be in simultaneously is therefore bounded by L. For each output outp in SC°, it follows that there can be at most L active sources at any time t for outp. Since each output outp gets renamed into the maximum number of possible active sources for itself, we have

M_outp ≤ L, for all outp ∈ Output(SC°)    (2)

From (1) and (2), we have

|RFA(t)| ≤ Σ_{outp ∈ Output(SC°)} L = |Output(SC°)| ⋅ L = O(|Output(SC°)|)

Theorem 2. At any simulation cycle, algorithm DetectConformance(SC, AC) adds a delay linearly proportional to the number of outputs in the original Statecharts. Further, the algorithm does not interfere with the cosimulation of SC and AC. Proof:


At any simulation time t, RFA can have at most O(n) elements, where n is the number of outputs before SC was transformed by applying algorithm EliminateOrthogonalSources(). This implies that there is at most an O(n) overhead at each simulation step for conformance checking. Since the algorithm does not generate any event that affects SC or AC, the theorem follows.

5.9 Summary

We presented the formal definition of conformance. A simple algorithm that can check for conformance between a performance-annotated Statecharts model and an ADEPT model has been proposed. The correctness and completeness of the algorithm are proved based on the stated design assumptions. Based on this algorithm, one can partially verify whether a proposed implementation indeed conforms to its specification.


Chapter 6 Results

Abstract

The effectiveness of our methodology is demonstrated by applying it to the design of a data-communication ring network based on the IEEE 802.5 token-ring specification. Using the methodology, we detected significant errors in specification, implementation, and communication of design intent that occurred during the early stages of our design process. In addition, preliminary performance estimates were obtained during the integrated modeling stage. The integrated-simulation environment, enabling dynamic back-annotation and conformance checking, is the key factor in the success of our methodology. The Statecharts descriptions in this chapter have been simplified for exposition. For complete versions of the Statecharts models, please refer to Appendix C.

6.1 Introduction

To demonstrate the scalability and effectiveness of our integrated-simulation based methodology, we applied it to a nontrivial design problem: specification and implementation of the token-ring data-communication protocol. This protocol is based on the ANSI/IEEE 802.5 standard [IEE85]. The token-ring, with its several components, provided a rich set of situations on which to exercise our methodology and prove its effectiveness. The steps of the methodology were executed manually. The token-ring is a suitable test-bed for the application of our methodology for three important reasons. First, the token-ring represents a typical reactive system. Since our methodology concerns the design of reactive systems, the choice of the token-ring network design problem for methodology application is appropriate. Second, the token-ring design problem retains much of its complexity even at a high level of abstraction. The design problems uncovered by the application of our methodology to the high-level descriptions of the token-ring were therefore complex and nontrivial. Third, we had an independent source [CUT90] for proposed implementations. We could thus emulate realistic design scenarios where the specification and implementations are typically developed by different


groups of people. An independent source of implementation ensured that the errors and inconsistencies in the implementation were not introduced explicitly by us. The reactive nature of the token-ring is evident in the manner in which it interacts with its environment. Requests for data transmission can arrive at potentially any time, at a pace depending on the network's environment. During heavy network traffic, data may need to be retransmitted. Errors in the stations can potentially occur at any time. In addition, the internal data queues may become full, causing ignored transmission requests and failed transmission deliveries. All these factors make the interaction of the token-ring network with its environment highly complex. Describing the behavior of the token-ring is a nontrivial task. This is demonstrated by the fact that the IEEE 802.5 standard consists of approximately thirty pages of protocol and data-format descriptions. The second factor that led to the choice of the token-ring is the considerable complexity of the token-ring specification even at high levels of abstraction. During early design stages, designers prefer to work with high-level abstractions of the system under design. Since our methodology is concerned with early design stages, we considered a very high-level interpretation of the IEEE 802.5 standard. The resulting specification is of considerable complexity, since we still need to describe the interaction of the token-ring network with its environment. The third factor leading to our choice of the token-ring was the availability of an independently developed performance model of an implementation of the token-ring. To make sure that we did not contrive our examples, we developed our token-ring performance model based on the thesis of Eric Cutright [CUT91]. In that thesis, a performance model of the IBM token-ring LAN was developed using an uninterpreted-modeling methodology [AYL92]. Since the IBM implementation of the token-ring LAN is compatible with the IEEE token-ring specification [STR87], an ADEPT version of the IBM implementation was developed [REV93, SOR92]. During the application of our methodology, we considered the previously developed models of the token-ring components as the performance models of our proposed implementations. Using an independently developed performance model resulted in a more realistic design scenario for the application of our methodology. The application of our integrated-simulation based methodology generated several interesting results. In the remainder of the chapter, we describe how we were able to detect


inconsistencies between Statecharts and ADEPT models. We were also able to conveniently obtain early performance estimates and resource bounds of the system that would have been difficult to obtain otherwise. These examples demonstrate the effectiveness of our integrated-simulation methodology for nontrivial design problems. The chapter is organized as follows. Section 6.2 briefly describes the token-ring, with a very high-level description of the Statecharts and the ADEPT models. Section 6.3 describes the results of the application of our integrated-simulation approach. We conclude this chapter in Section 6.4.

6.2 A brief overview of Token-ring

A token-ring is a data network that is intended for use in commercial and light-industrial environments. A token-ring network is based on a ring topology (Figure 6.1).

Figure 6.1 Token-ring configuration with five stations.

The

token-ring network is best described by the following excerpts from Chapter 2, General Description, ANSI/IEEE Standard 802.5-1985 [IEE88]: A token-ring consists of set of stations serially connected by a transmission medium. Information is transferred sequentially, bit by bit, from one active station to the next. Each station generally regenerates and repeats each bit and serves as the means for attaching one or more devices (terminals, workstations) to the ring for the purpose of communicating with other devices on the network. A given station (the one that has access to the medium) transfers information onto the ring, where the


information circulates from one station to the next. The addressed destination station(s) copies the information as it passes. Finally, the station that transmitted the information effectively removes the information from the ring. A station gains the right to transmit its information onto the medium when it detects a token passing on the medium. The token is a control signal comprised of a unique signalling sequence that circulates on the medium following each information transfer. Any station, upon detection of an appropriate token, may capture the token by modifying it to a start-of-frame sequence and appending appropriate control and status fields, address fields, information field, frame-check sequence, and the end-offrame sequence. At the completion of its information transfer and after appropriate checking for proper operation, the station initiates a new token, which provides other stations the opportunity to gain access to the ring. Error detection and recovery mechanisms are provided to restore network operation in the event that transmission errors or medium transients (for example, those resulting from station insertion or removal) cause the access method to deviate from normal operation. Detection and recovery for these cases utilize a network monitoring function that is performed in a specific station with backup capability in all other stations that are attached to the ring. The token-ring communication protocol comprises three components: token-access control protocol, network-addressing scheme, and the token-monitor function. The token-access control protocol regulates data flow in the ring topology. The protocol is based on the principle that permission to use the communications link is passed sequentially from node to node around the ring. A single token circulates on the ring. Each node, in turn, gets an opportunity to transmit data when it receives the token. A station having to transmit the data can capture the token, change the token status to indicate its data-transmission mode, and begin data transmission. The receiving station copies the data and marks the token to indicate its acknowledgment. When the token with acknowledgment arrives at the sending station, the token is released by the sender station indicating the availability of the token for further data transmissions by other stations. In the network addressing scheme, a unique address distinguishes each station from all the other stations on the token-ring network. When a token arrives at an active station, the station examines the destination address field of the token and copies the data if the node address matches the destination address. The token-monitor function ensures normal token operation. A token-monitor is always active in a single station on a token-ring. If normal token operation is disrupted, the monitor


initiates an error recovery procedure. The disruption of normal token-ring operation can occur in two ways; either due to loss of token or due to the continuous circulation of token. The loss of token is detected by a timer function. The timer function generates time-outs when no token is received within a predetermined delay since the last token received by the station with the active monitor. Since we are primarily concerned with early stages of the design, we have considered very high-level descriptions of the token-ring. Details regarding the exact token format, priority resolution, token holding times, etc. are not considered in this dissertation. For a complete description of the token-ring protocol and its implementation, see [IEE87, IEE85].

6.2.1 Statecharts description of the token-ring

We have developed a Statecharts version of the IEEE 802.5 token-ring network specification. Figure 6.2 presents a top-level Statecharts description of the token-ring network.

Figure 6.2 A top-level Statecharts description of the token-ring. (@NAME implies that the state NAME is described in another chart.)

Figure 6.2(a) depicts a top-level view of the Statecharts description of the token-ring. The description is partitioned into two parts: the status of the individual stations and the status of the circulating token. The example consists of five stations, where each station X is described by the state NODE_X. In this case, X ranges from 1 to 5. The status of the circulating token is described by the state TOKEN_FRAME. Each station has a timer component to generate time-outs, a monitor component to provide error detection and recovery, and a processor-status component to maintain the transmission requests and deliveries appearing at the station from an associated processor. For a station X, the status of these components is respectively described by the following sub-states of NODE_X: WATCHDOG_TIMER_X, MONITOR_X and PROCESSOR_STATUS_X (Figure 6.2 (b)). The above-mentioned sub-states are composed of further sub-states; however, these sub-states are not displayed in Figure 6.2. To view a complete description of the states in the Statecharts description of the token-ring, see [REV94]. In the remainder of this chapter, the figures containing Statecharts descriptions are simplified for the sake of clarity of illustration.

6.2.2 ADEPT model of the token-ring

Before describing the ADEPT model of the token-ring, we state a notational convention that will be followed in the remainder of this chapter. In the context of a model developed in ADEPT, we will use the italicized word "token" to indicate a performance-modeling entity that represents the flow of information in a model developed using ADEPT. The non-italicized word "token" will represent the circulating token in a token-ring network.

Figure 6.3 provides a top-level description of a completed ADEPT model of a token-ring network.

Figure 6.3 A top-level ADEPT description of a token-ring station.

A single station of an ADEPT model has three components: WATCHDOG_TIMER, MONITOR, and the NODE_PROTOCOL. The token enters the station via the WATCHDOG_TIMER, passes through the MONITOR, and exits back into the transmission medium from the NODE_PROTOCOL component. Each of these components is hierarchically composed of various subcomponents. The ports labeled RCV and TR are used to receive data and transmit requests respectively. For a complete description, see [REV94]. Notice that the ADEPT model is not an implementation in itself. Instead, the model synthesizes the flow of information in a proposed implementation. However, mismatches observed between the behavior of the ADEPT model and the corresponding Statecharts model of a given component will indicate an inconsistency between the specification and implementation.

The correspondences between the ADEPT components and the Statecharts sub-states are as follows. The WATCHDOG_TIMER and MONITOR components in the ADEPT model respectively correspond with the WATCHDOG_TIMER and MONITOR states in the Statecharts model. The PROCESSOR_STATUS state is determined by the RCV and TR ports of the ADEPT model. The internal structure of the ADEPT components and the analogous Statecharts states are quite dissimilar. For example, one cannot directly map the sub-states in the Statecharts description of the WATCHDOG_TIMER to the components of the corresponding ADEPT model, as is apparent in Figure 6.4. Structural dissimilarities in the Statecharts and ADEPT models make the combined task of making the models interact and checking the conformance between them a formidable challenge. In spite of the dissimilarities, our methodology overcomes this challenge by making the Statecharts and ADEPT models interact. In the following section, we present several examples, where model interaction led to the detection of several design inconsistencies and to the prediction of performance metrics even before the complete performance model was developed.

6.3 Examples

The examples presented in this section were generated due to the application of our integrated-simulation based methodology to incrementally develop a performance model of the token-ring. These examples demonstrate how integrated simulation helps to uncover design inconsistencies and to obtain preliminary performance metrics with partially developed performance models. In each example that follows, we first briefly describe the result obtained. Next we describe the components involved in the integrated simulation and then briefly indicate how the integration between the components was achieved. The operational scenario that generated the results is described next. Finally, we describe the ramifications of the results obtained.

Figure 6.4 Watchdog timer described (a) using Statecharts, and (b) using ADEPT.

Labeling conventions used in Figures 6.4-6.14

For the sake of clarity and exposition, both Statecharts and ADEPT models have been simplified. In addition, the following labeling conventions have been followed:
• Most Statecharts labels use the Helvetica font. For example, see the WATCH_TM_ON state in Figure 6.4. The exceptions are some state names in the performance-annotated Statecharts.
• In performance-annotated Statecharts, the names of the original states are in bold Helvetica font. For example, see the WATCH_TM_ON state in Figure 6.6(a).
• Comments are written in bold Times font. See the comment in Figure 6.4 (b).
• ADEPT entities are referred to in the comments in italic Times font. See the comment in Figure 6.4 (b).

6.3.1 Test-bench: Performance estimates from Statecharts

We were able to generate very early performance estimates from the Statecharts specification without having tied down the specification to any implementation. In order to drive the simulation of the token-ring, we first developed a test-bench in ADEPT. The purpose of the test-bench was to create an operational scenario for executing the token-ring protocol. These operational scenarios were created for the purpose of testing and generating performance metrics such as throughput, loss of tokens, latencies, etc.

The test-bench consists of five instantiations of the ADEPT component called NODE_TEST, one for each station (Figure 6.5). Each NODE_TEST generates the operational scenario for its corresponding station by reading parameters such as inter-arrival times, errors, and requests for transmission of data from an external data file. NODE_TEST consequently generates simulation stimuli in the form of coloring, placement, and removal of ADEPT tokens (different from the circulating "token" in the token-ring) on its output port. Using integrated simulation, these ADEPT token-related activities were interpreted by the Statecharts model of the entire token-ring. The statistics obtained gave an estimate of the effect of network load on the network throughput. Notice that these estimates were obtained in the absence of any performance model of the token-ring. This is an example of complementary modeling, where disjoint aspects of the system were modeled by different modeling environments: Statecharts modeled the token-ring protocol, whereas ADEPT modeled the environment in which the protocol is tested.
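As a rough illustration of the kind of parameter-driven stimulus generation performed by the test-bench (and not the actual NODE_TEST module), the sketch below reads hypothetical per-station parameters and produces a stream of timed transmission-request stimuli; the file format and field names are assumptions made purely for the example.

    import csv
    import random

    def load_station_parameters(path):
        # Hypothetical CSV columns: station, mean_interarrival, error_rate, destination.
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def generate_stimuli(params, horizon=10000):
        """Yield (time, station, kind, destination) stimuli for each station."""
        for row in params:
            t = 0.0
            mean = float(row["mean_interarrival"])
            while t < horizon:
                t += random.expovariate(1.0 / mean)   # Poisson-like inter-arrival times
                kind = "error" if random.random() < float(row["error_rate"]) else "send"
                yield (t, row["station"], kind, row["destination"])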

Figure 6.5 Components of the test-bench developed in ADEPT.

6.3.2 Watchdog timer: Counterintuitive semantics of Statecharts

In this example, we show how integrated simulation of the ADEPT model with the Statecharts model effectively uncovered an error in the Statecharts specification. This error arose due to the counterintuitive semantics of the tm() (abbreviation for time-out) operator in the language of Statecharts.

According to the documentation in [ILO92], the event tm(E, N) occurs when N clock units have passed since the last occurrence of event E. Clearly, once event E occurs, if the next E does not occur within N clock units, the tm(E, N) event will be generated. Failure of intuition occurs when one considers the beginning of a simulation session. Intuition dictates that in the absence of event E, tm(E, N) should occur at time N, since no event E has occurred in the last N clock units. However, though it was not explicitly stated, the Statecharts simulation environment implicitly assumes that the event E, and therefore the event tm(E, N), occurred infinitely long ago before the simulation session started. This assumption leads to the observation that in the absence of event E, the corresponding event tm(E, N) will not occur at time N, contradicting our intuition.
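The difference between the two readings of tm(E, N) can be made concrete with a small Python sketch; this is purely an illustration of the two interpretations discussed above, not of the ExpressVHDL implementation.

    def timeout_intuitive(occurrences, N, now):
        """Fires at `now` if no E occurred in the last N clock units,
        counting from time 0 when no E has occurred at all."""
        last = max((t for t in occurrences if t <= now), default=0)
        return now - last >= N

    def timeout_statecharts(occurrences, N, now):
        """Observed semantics: E is implicitly assumed to have occurred infinitely
        long ago, so tm(E, N) cannot fire until a real E has occurred."""
        past = [t for t in occurrences if t <= now]
        if not past:
            return False        # no real E yet, hence no time-out
        return now - max(past) >= N

    # With no TOKEN_ARRIVED events at all, the two interpretations disagree at time 100.
    assert timeout_intuitive([], N=100, now=100) is True
    assert timeout_statecharts([], N=100, now=100) is False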

The ADEPT model of the watchdog timer component followed the intuitive approach in implementing the time-out, as opposed to the corresponding Statecharts model, which followed the counterintuitive approach. This led to a mismatch in the outputs of the two models.

The relevant portion of the performance-annotated version of the original Statecharts specification is shown in Figure 6.6.

Figure 6.6 Watchdog timer (a) Timer behavior (b) Handler for TIME_OUT_RFA and TIME_OUT_ACK

The timer can exist in one of two mutually exclusive states: WATCH_TM_OFF and WATCH_TM_ON, depending on whether the associated monitor of the station is inactive or active, respectively. On entering the WATCH_TM_ON state, the timer defaults to the WATCH_TM_ON_AUX state. If the TOKEN_ARRIVED event does not occur within 100 clock units since the last TOKEN_ARRIVED occurred, the WATCH_TM_ON_WAIT state is entered after generating a TIME_OUT_RFA event. Generation of TIME_OUT_RFA indicates that, according to the Statecharts model, a time-out should be generated by the performance model. Under normal operation, when the performance model generates the corresponding time-out event, the integrated-simulation environment generates the TIME_OUT_ACK event. Following the Statecharts presented in Figure 6.6 (b), when TIME_OUT_ACK is generated, WATCH_TM_ON_TO_SYNCED will be entered. Entry to this state causes the Statecharts to eventually generate the TIME_OUT event.

Analogous to the Statecharts model, the ADEPT model (Figure 6.7) operates as follows.

Figure 6.7 A top-level ADEPT model of the Watchdog timer

The arrival of the token is indicated when a token arrives at the wt_token_in port of the watchdog_timer module. By placing an ADEPT token on the wt_timeout port, a time-out event is signalled. When a token is placed on the wt_timeout port, the environment generates the event TIME_OUT_ACK to be consumed by the Statecharts model. If the event TIME_OUT_ACK arrives before the event TIME_OUT_RFA is generated, the event TIME_OUT_ERR is generated, signifying a mismatch between the two models, since the Statecharts model was clearly not expecting a time-out.

The error was detected in the following operational scenario. The token-ring is self-initializing. Given a monitor that is active by default, in the absence of a token arrival, the watchdog timer component will generate a time-out event. This time-out event is then detected by the monitor, which then generates the token for circulation in the token-ring. The ADEPT model produced the time-out, thus generating the TIME_OUT_ACK event.

However, owing to the semantics of the tm() operator in the Statecharts, the event TIME_OUT was not generated. The event TIME_OUT is generated only if the tm(TOKEN_ARRIVED, 100) event occurs. However, as the TOKEN_ARRIVED event had not yet occurred, tm(TOKEN_ARRIVED, 100) did not occur, and therefore TIME_OUT_RFA did not occur either. Occurrence of TIME_OUT_ACK without the corresponding occurrence of TIME_OUT_RFA precipitated the generation of TIME_OUT_ERR, which flagged a mismatch between the two models.

To fix this error, tm(TOKEN_ARRIVED, 100) was replaced by tm(en(WATCH_TM_ON), 100), as shown in Figure 6.8.

Figure 6.8 Watchdog timer: Corrected version of the Statecharts model in 6.6 (a).

This would result in the generation of the TIME_OUT event at time N in the absence of the occurrence of the TOKEN_ARRIVED event before time N. In summary, we were able to detect an error in the specification due to an inconsistency in the behaviors of the Statecharts and ADEPT models. The inconsistency can be ascribed to the counterintuitive execution semantics of the tm() operator.

6.3.3 Monitor: Incorrect component instantiation in ADEPT

We now present an error that occurred due to an oversight on the part of the designer. During a version upgrade of the ADEPT software, all the modules used in ADEPT had to be upgraded with their newer versions. Unfortunately, one instance of the decider module was incorrectly updated, i.e., its parameters were somehow copied incorrectly into the newer version. The ADEPT module in question is indicated by an arrow in Figure 6.9. The module is a decider module that directs the token flow to one of its output ports based on its parameters. We now briefly describe the relevant portions of the ADEPT and Statecharts models of the monitor.

Figure 6.9 ADEPT model of monitor. The faulty module is indicated by the arrow.

For correct operation, the value of the field parameter should have been set to tag2. The semantics associated with the indicated decider module is as follows. When a token enters the decider's input port, the decider module checks if the token is free or busy. The status of the token is determined by checking its tag2 field. If tag2 is set to 0, the token is free; otherwise it is busy. If the token is free, it is sent down the wire labeled MA_NO_ERROR_FREE. If the token is found busy instead, it is routed via the MONITOR_COUNT module to the wire labeled MA_NO_ERROR_NO_FREE.

In Figure 6.10, we present the portion of the performance-annotated Statecharts model that models the analogous behavior. Event TOKEN_ARRIVED indicates the arrival of the token, and the monitor enters the CHK_TOK_FREE state, where it checks if the token is free. If the token is free, i.e., the condition [in(TOK_FREE)] is true, the event C_T_F_TOKEN_READY_RFA is generated, implying that the Statecharts is now expecting the event C_T_F_TOKEN_READY_ACK.

Figure 6.10 Statecharts model of monitor

Conversely, if [in(TOK_FREE)] is false, the Statecharts model enters the CHK_COUNT state, and generates either C_C_TOKEN_READY_R_RFA or C_C_TOKEN_READY_RFA, depending on whether the token is reset or not. During integrated simulation, the correlations between the two models are explicitly specified by the designer so that the following occur. On arrival of a token on MA_NO_ERROR_FREE, the integrated environment generates the event C_T_F_TOKEN_READY_ACK. On arrival of a token on MON_NO_ERROR_NO_FREE, either the event C_C_TOKEN_READY_R_ACK or the event C_C_TOKEN_READY_ACK is generated by the integrated environment, depending on whether the token is reset or not. As mentioned in Chapter 5, if any of these ack events occurs before its corresponding rfa event, the corresponding error event is generated. For example, if the event C_C_TOKEN_READY_ACK is generated when no C_C_TOKEN_READY_RFA has occurred, C_C_TOKEN_READY_ERR will be generated.

We now describe how the error was discovered. In Figure 6.9, we indicate the erroneously copied module by an arrow. The value of the field is set to tag1. It should have been set to tag2 instead. As a consequence, the retrial limit is not checked, causing the station to retransmit as long as it takes to successfully transmit the token, which is in direct violation of the original specification. Using our integrated simulation, we were able to catch this error during the following scenario. The Statecharts had generated the event C_C_TOKEN_READY_RFA, implying that the token was busy but did not need to be reset. Under the same conditions, the ADEPT model failed to send the token via the MONITOR_COUNT module, and placed the token on MA_NO_ERROR_FREE, finally generating the event C_T_F_TOKEN_READY_ACK. As the event C_T_F_TOKEN_READY_RFA had not been generated yet, an error was indicated by the generation of the event C_T_F_TOKEN_READY_ERR. To catch this error without integrated simulation, the designer would have had to look for it specifically. Further, there would have been no performance-annotated Statecharts version against which to compare the ADEPT behavior with the behavior dictated by the specification. The designer would have been forced to trace all related outputs carefully. Integrated simulation automated this process of error detection.
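The effect of the mis-copied parameter can be illustrated by a small, hypothetical Python sketch of the decider's routing rule; the field names echo the tag1 and tag2 parameters discussed above, but the code is not the actual ADEPT module.

    def route_token(token, status_field="tag2"):
        """Route a circulating token: tag2 == 0 means free, anything else means busy."""
        if token[status_field] == 0:
            return "MA_NO_ERROR_FREE"        # free token: no retry counting needed
        return "MA_NO_ERROR_NO_FREE"         # busy token: goes via MONITOR_COUNT

    busy_token = {"tag1": 0, "tag2": 1}      # busy (tag2 != 0), but tag1 happens to be 0

    # Correct instantiation checks tag2 and routes the busy token through MONITOR_COUNT,
    # where the retrial limit is enforced.
    assert route_token(busy_token, "tag2") == "MA_NO_ERROR_NO_FREE"

    # The erroneous instantiation checked tag1, so the busy token was misrouted and
    # the retrial limit was never checked.
    assert route_token(busy_token, "tag1") == "MA_NO_ERROR_FREE"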

6.3.4 Node protocol: Unanticipated scenario encountered

We demonstrate how the integrated-simulation approach led to the discovery of a design scenario that was not accounted for in the Statecharts model. Specifically, the Statecharts failed to anticipate the arrival of a data-transmission request while the circulating token was at the active station and still being considered for the data transmission. This caused a mismatch in the behavior of the Statecharts and ADEPT models. The mismatch occurred because both models interpreted the arrival of the data-transmission request differently.

In Figure 6.11 (a), the relevant portion of the Statecharts is presented.

Figure 6.11 (a) Unanticipated scenario in Statecharts specification of Node protocol (b) ADEPT model of a possible interpretation of (a)

Suppose the station is in the WAIT state and an event TOKEN_READY occurs, indicating the arrival of a token. If, at this point, the token is error-free and ready to transmit data, the station will enter the CHECK_ERROR state. An entry to the state CHECK_FREE occurs next. In the CHECK_FREE state, the station checks if there is data available to be sent with the token. The need to send is determined by the condition [in(NEED_TO_SEND)], which should be true if there is data waiting to be sent across the ring from this station. If the condition is found true, the model generates the event GET_TOKEN_FROM_PROC_RFA, indicating it is ready to transmit and is waiting for acknowledgment from its ADEPT counterpart. If the condition is found false, the model predicts that it would send the token over the ring directly and generates PUT_TOKEN_ON_RING_RFA.

The inconsistency between the behaviors of the two models arises when the condition [in(NEED_TO_SEND)] changes between the instant the event TOKEN_READY occurs and the instant the condition [in(NEED_TO_SEND)] is examined. During the execution of Statecharts alone, these transitions occur in zero time, and therefore the likelihood of a data-transmission request arriving during the execution of these transitions would have been slim. By incorporating functional timing from ADEPT, the same transitions took longer to execute, and therefore increased the chance of the occurrence of the unanticipated design scenario during their execution.

We now explain the scenario that led to the discovery of this error. The Statecharts model started at the WAIT state, shown in Figure 6.11 (a). During an execution session of the integrated model, when the TOKEN_READY event occurred, the condition [in(NEED_TO_SEND)] was false. This caused the Statecharts to predict that there would be no transmission of data, and it waited in the SEND_WAIT state for the ADEPT model to reach an analogous state. The time taken between the arrival of TOKEN_READY and the entry to SEND_WAIT was zero. For the ADEPT model, the transmission request arrived after the circulating token arrived and before the model made a decision regarding transmission of data. The ADEPT model therefore generated a token with the data and caused the integrated environment to generate the GET_TOKEN_FROM_PROC_ACK event.

As the corresponding GET_TOKEN_FROM_PROC_RFA event was not generated, event GET_TOKEN_FROM_PROC_ERR was generated, indicating a mismatch in the model behaviors. The scenario was resolved by explicitly specifying the functionality of the protocol regarding its response to requests that arrive after TOKEN_READY and before entering the CHECK_FREE state. In the corrected specification (Figure 6.12 (a)), we made it explicit that the station only considers the value of the condition at the instant TOKEN_READY occurred. This value is stored in the condition variable n2s. The ADEPT model (Figure 6.12 (b)) also had to change, as depicted by the introduction of the SWITCH module and the RC (read color) module, which ensured that only those data-transmission requests were considered that were present at the time the token arrived at the receiver_link module in 6.12 (b). This example presents a case that would have been hard to anticipate with simple execution of the Statecharts model or the performance model alone. Either model is consistent in itself, but contradicts the other model in its response to the environment.
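A minimal Python sketch of the race and of the latched-condition fix is given below; the helper need_to_send_at and the delay values are hypothetical and serve only to illustrate why the two models diverged.

    def decide_original(need_to_send_at, processing_delay):
        """Samples the condition after the processing delay, as the ADEPT model
        effectively did: a request arriving during the delay changes the outcome."""
        if need_to_send_at(processing_delay):
            return "GET_TOKEN_FROM_PROC"
        return "PUT_TOKEN_ON_RING"

    def decide_corrected(need_to_send_at, processing_delay):
        """Latches the condition at the instant TOKEN_READY occurs (the n2s variable),
        so requests arriving during processing are deferred to the next token."""
        n2s = need_to_send_at(0.0)
        return "GET_TOKEN_FROM_PROC" if n2s else "PUT_TOKEN_ON_RING"

    # A transmission request arrives 3 time units after TOKEN_READY, while the token
    # is still being examined (processing takes 5 units in this illustration).
    need_to_send_at = lambda t: t >= 3.0

    assert decide_original(need_to_send_at, 5.0) == "GET_TOKEN_FROM_PROC"  # ADEPT's view
    assert decide_corrected(need_to_send_at, 5.0) == "PUT_TOKEN_ON_RING"   # Statecharts' view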


Figure 6.12 Ambiguity removed from Node_protocol. (a) Non-ambiguous Statecharts specification of Node protocol (b) ADEPT model of an implementation of (a)


6.3.5 Protocol specification: Deviation from Statecharts semantics

We present an example of a significant miscommunication of designer intent that was caught by our integrated-simulation methodology. The misinterpretation of the specification (Figure 6.13) arises as follows. Once a station transmits data and receives the token with an acknowledgment, it frees the token and passes it on to the next station.

Figure 6.13 Statecharts model of the protocol specification showing that the sender should not retransmit a token with data immediately after it is acknowledged.

The ADEPT model, as it was discovered later, deviated from this behavior. When a token arrived at the station with an acknowledgment for the last message sent by the station, the ADEPT model of the station reset the corresponding token and sent it to the transmitter module. Since the token was reset, the token status was free. The transmitter module would then send further data with that token, provided there was more data to be sent. This is incorrect behavior when compared to the specification, as sending data from the station in this manner will lead to unfairness in ring access. To eliminate this error, we added a token-bypass mechanism to the ADEPT model such that the reset token is directly put on the ring, bypassing the transmitter. The top level of the modified node_protocol ADEPT model is shown in Figure 6.14.
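As a rough sketch of the corrected behavior (not the ADEPT implementation), the following hypothetical Python fragment shows a station releasing an acknowledged token directly onto the ring instead of offering it to its own transmitter again; the field names are illustrative assumptions.

    def handle_returning_token(token, tx_queue, own_address):
        """Corrected behavior with the token bypass."""
        if token["from"] == own_address and token["acknowledged"]:
            token["busy"] = False            # reset the token
            return "put_on_ring"             # bypass: do not offer it to the transmitter
        if not token["busy"] and tx_queue:
            token["busy"] = True
            token["data"] = tx_queue.pop(0)
            token["from"] = own_address
            return "transmit"
        return "forward"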

6.3.6 Node_protocol: Estimating queue size

This example demonstrates how we were able to estimate the maximum size of a queue needed to store requests for data transmissions.

Figure 6.14 ADEPT model of the correctly implemented node protocol. (The transmission queue, where requests for data transmissions are stored, is marked c5; the token-bypass mechanism ensures that the token is not considered for data transmission twice in a row by the same station.)

The estimation of the queue size was not possible from the Statecharts specification alone, since the Statecharts does not explicitly model the queue. Neither is it convenient to estimate the size of the queue from the ADEPT model without developing it completely. If the node_protocol was tested in isolation, it would have been hard to generate realistic input test parameters. By providing an environment where node_protocol interacts with the rest of the system, integrated simulation offered a convenient way to estimate the needed size of a transmission queue.

To obtain the queue size, we used the node_protocol module. The transmission queue, marked c5 in Figure 6.14, was set to a large capacity. This capacity was chosen to be a large number, which we were certain would be an upper bound on the queue size. Integrated simulation was performed next, and the length of the c5 queue was monitored. The largest queue length observed gave a reasonable estimate of the queue size. For further examples of how integrated simulation generated preliminary performance estimates, see [SSW92].
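The measurement itself amounts to tracking the peak occupancy of the transmission queue over a run; the following tiny Python sketch of that bookkeeping is illustrative only and is not part of the ADEPT model.

    class MonitoredQueue:
        """FIFO with a deliberately generous capacity whose peak length is recorded,
        mirroring how the c5 queue was monitored during integrated simulation."""

        def __init__(self, capacity=10000):
            self.items = []
            self.capacity = capacity
            self.peak = 0

        def put(self, request):
            if len(self.items) >= self.capacity:
                raise OverflowError("capacity was not a safe upper bound")
            self.items.append(request)
            self.peak = max(self.peak, len(self.items))

        def get(self):
            return self.items.pop(0)

    # After the simulation run, queue.peak is the estimate of the required queue size.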


6.4 Conclusions

Several design errors were discovered by applying our integrated-simulation approach to the preliminary design stages of the token-ring data-communication network. The sources of these errors were counterintuitive specification semantics, designer oversight, specification ambiguity, and misinterpreted design intent. The diversity in the range of sources of errors detected demonstrates how our methodology effectively uncovers a wide class of errors. In addition to these design errors, we were able to obtain preliminary estimates of system performance and resource requirements without developing the entire performance model. The estimates were obtained by evaluating the proposed implementation of a component in the context of the entire system instead of evaluating the implementation in isolation. Integrated simulation allowed us to execute the performance model of the system component under design as if it were interacting with the rest of the system, regardless of the status of the system's implementation. The errors detected and the performance estimates obtained above might have been obtained independently without applying the integrated-simulation approach. However, we are not aware of any other approaches that obtain such results in a reasonably efficient and practical manner. Using integrated simulation, we were able to detect errors that we believe would have otherwise propagated to lower-level design stages, or would have been left undetected. Integrated simulation also helped to obtain preliminary performance and resource-requirement predictions, which are potentially useful for guiding the designers in selecting their implementation alternatives.

6.5 Summary

We applied our integrated-simulation based methodology to the design of a network based on the IEEE 802.5 token-ring specification. The token-ring provided a nontrivial test-bed to demonstrate the effectiveness of our integrated-simulation based approach. We were able to obtain several interesting results, which can be grouped into two major categories. In the first category, a number of design errors were detected, covering both specification and implementation errors. In the second category, preliminary performance estimates were obtained using partially developed performance models; such estimates might not have been feasible to obtain without integrated simulation. These experimental results demonstrate the effectiveness of our methodology in designing reactive systems.


Chapter 7 Summary, Conclusions and Future Work

Abstract

We summarize our approach to support model-continuity during early stages of reactive-system design. As identified earlier, two important, early, but dissimilar design-stages are operational specification and performance modeling. We show, in retrospect, how our integrated-simulation based approach was successful in reconciling the dissimilarities between these two design stages. We discuss how this success is validated by our experiments. In these experiments, we detected inconsistencies between the supposedly corresponding models and obtained performance estimates with partially-developed performance models. Finally, we point out several interesting extensions of our work.

7.1 Introduction

A review of the state of the art in the design of digital systems had indicated that model continuity is not adequately supported in the early stages of digital-system design. We have addressed this problem by developing a design methodology that supports model continuity during early design stages of reactive systems: operational-specification modeling and performance modeling. To support model continuity, we identified the following underlying differences between the two design stages:
• Differences in modeling domains and underlying formalisms, and
• Absence of functional timing in operational specifications.
The problem of supporting model continuity is thus reduced to the development of a methodology that reconciles these differences. We propose integrated simulation of the two models as a means for such reconciliation. The primary advantages of integrated simulation are:
• Evaluation of a proposed implementation of a component against its own specification,
• Evaluation of a proposed implementation of a component in the context of other components of the system under design, and
• Evaluation of a proposed implementation's impact on the entire system under design.
We demonstrate the feasibility and effectiveness of our methodology using a rich set of examples. We were able to discover errors and suggest improvements that would have made a significant impact on the cost, the time-to-market, and the robustness of the final product, had our approach been applied in real life. We demonstrate that the additional designer effort required to apply our methodology is minimal and that it scales well with the complexity of the system under design. In the rest of the chapter, we present our conclusions in some detail. In Section 7.2, we summarize the lessons learned while addressing the theoretical and implementation-related concerns of developing an integrated-simulation based approach. In Section 7.3, we discuss lessons learned from the various examples, followed by the highlights of the methodology in Section 7.4. We then suggest several interesting extensions of our work in Section 7.5.

7.2 Research results

There were two primary challenges that we encountered in implementing integrated simulation of a Statecharts and an ADEPT model:
1. How to reconcile the differences in the modeling environments of Statecharts and ADEPT so that they can communicate with each other, and
2. How to incorporate functional timing from the ADEPT model into Statecharts without compromising the conceptual level of the original Statecharts specification.
The first challenge stems from the difference in the corresponding modeling domains of Statecharts and ADEPT: the behavioral and the performance domains. Between Statecharts and ADEPT models, this difference is mainly due to the different formalisms adopted by the two corresponding modeling environments: the former is based on a state-machine formalism, whereas the latter is based on a Petri-net formalism. This difference in formalisms presents the first challenge of implementing integrated simulation: how to correlate activities in the Statecharts model with analogous activities in the ADEPT model and vice versa. To achieve this correlation, we used the VHDL environment to simulate these two different models under a common umbrella, using the VHDL language features to communicate between the two models. To shield the designer from low-level VHDL details, we show how most of the steps in integrated simulation can be automated. As a result, the designer operates at the conceptual level of Statecharts and ADEPT, while the simulation environment takes care of the low-level VHDL details to implement the integrated simulation.

The second challenge of implementing integrated simulation arises due to the absence of timing information in the Statecharts model. Absence of functional timing is generally unavoidable in a Statecharts, due to the lack of available timing information during the specification stage and the assumption of perfect synchrony [BER91]. We explained how the absence of functional timing can be a potential source of ambiguity in the specification models for reactive systems. If these ambiguities remain undiscovered, they may lead to misinterpretation of the specifier's intent. Such ambiguities have a better chance of being discovered if the specification model is executed with the functional timing information added to it. There are two alternatives for incorporating functional timings: a direct or an indirect approach. The direct approach in the context of Statecharts is to add delays to the Statecharts transitions. Such additions to the Statecharts model will make it implementation dependent, a highly undesirable property for a specification. We therefore use an indirect and dynamic approach based on integrated simulation. In this approach, the Statecharts is modified automatically, using the technique called performance annotation. Performance annotation preserves the implementation-independence of the original model, and preserves the original structure of the specification. The modified Statecharts model can dynamically incorporate functional-timing information from a concurrently executing performance model. While our approach is specific to Statecharts, the general principle of integrated simulation and performance annotation should be applicable to a wide class of systems, since Statecharts and ADEPT are fairly representative of many modeling environments for reactive systems. As any implementation-independent operational-specification model lacks functional-timing information, a combined approach of applying performance annotation to the specification model and its integrated simulation with a lower-level model will be an effective method to support model continuity.


7.2.1 A mechanism to incorporate functional timing into Statecharts

We now discuss our observations regarding performance annotation. We based this novel technique on output synchronization, i.e., modifying the original specification so that the transition generating an output waits until the analogous output is generated by the performance model. We found the output-based synchronization approach to be effective in dynamically incorporating functional timings, as it allows the designer to concentrate on specifying the correlation between the activities in the Statecharts and ADEPT models. The modifications needed in the Statecharts model to incorporate functional timing were automatically derived from these specified correlations. The derivation is based on a set of rules for performance annotation. These rules are unambiguous, generally applicable to any Statecharts model, and preserve the basic structure of the Statecharts model. In fact, to the outside world, the operational specification remains unchanged, except for the fact that it now executes with functional-timing information.
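To make the output-synchronization idea concrete, the following VHDL sketch reduces it to its simplest form. The entity, the signal names (SPEC_READY, PERF_DONE, TIME_OUT), and the use of bit toggles in place of the ExpressVHDL trigger signals are illustrative assumptions, not the generated code; the sketch only shows the waiting discipline that performance annotation imposes on an output-generating transition.

  entity output_sync_sketch is
    port ( SPEC_READY : in bit;       -- specification has reached the output-generating transition
           PERF_DONE  : in bit;       -- analogous output observed in the performance model
           TIME_OUT   : buffer bit ); -- the synchronized output event
  end output_sync_sketch;

  architecture sketch of output_sync_sketch is
  begin
    -- The annotated transition does not emit its output as soon as the
    -- specification is ready; it waits for the performance model, so the
    -- output inherits the implementation's functional timing.
    process
    begin
      wait on SPEC_READY;             -- specification ready to produce the output
      wait on PERF_DONE;              -- corresponding activity completes in the ADEPT model
      TIME_OUT <= not TIME_OUT;       -- emit the output at the implementation's time
    end process;
  end sketch;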

7.2.2 A precise definition of conformance

We have derived a precise definition of conformance between a Statecharts model and a concurrently executing performance model. The definition is based upon the agreement between the observed sequences of outputs of the Statecharts model and its corresponding ADEPT model. The definition of conformance makes no assumptions about the internal structure of the specification; it is based entirely on externally observable behavior during simulation. Having a precise definition enabled us to develop a simulation-based algorithm to check for conformance between the two models.
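Schematically, and using notation that does not appear in the original text, the definition can be written as follows, where the equivalence relation is the designer-specified correlation between output activities:

  Let $O_S = \langle s_1, s_2, \ldots, s_m \rangle$ be the sequence of output events observed from the
  Statecharts model during a simulation run, and $O_A = \langle a_1, a_2, \ldots, a_n \rangle$ the sequence
  observed from the ADEPT model. Then
  \[
    \mathrm{conform}(O_S, O_A) \iff m = n \ \wedge\ \forall i \in \{1,\ldots,m\}:\ s_i \equiv a_i ,
  \]
  where $s_i \equiv a_i$ means that the $i$-th outputs are correlated analogues of each other.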

7.2.3 A mechanism to check for conformance

Checking conformance between models that differ in their domains and levels of abstraction is a nontrivial problem. We address this problem by developing an algorithm that allows one to check the conformance of a Statecharts model and an ADEPT model during simulation. We made two assumptions regarding the models:
1. the performance model finishes a task later than the Statecharts model finishes the same task, and
2. all dependencies are explicitly specified in the specification.
Both assumptions are reasonable. The first assumption is reasonable because the Statecharts model lacks functional timing, and therefore executes its transitions with zero delay; the ADEPT model, on the other hand, represents a real implementation and therefore should include some nonzero delay. The second assumption is reasonable because a dependency that is not explicitly specified is based on implementation-dependent parameters. If a dependency is explicitly specified, a violation of that dependency during simulation will be automatically detected. If, instead, the dependency is not explicitly specified, its violation during simulation will not be flagged, as the algorithm cannot infer the dependency. However, in such cases it is likely that the behavior of the two models will subsequently diverge, resulting in a mismatch of output sequences that will flag an error. Given these two realistic design assumptions, we show that the algorithm correctly determines all cases of nonconformance between the two models during a simulation session with respect to the explicitly specified dependencies. While not a formal-verification method, our simulation-based method is practical, although its effectiveness is limited by the operational scenarios generated by the simulation environment.
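The following sketch conveys the flavor of such a simulation-time check; it is not the dissertation’s algorithm. The observable outputs of the two models are abstracted here as integer codes on two hypothetical signals, SPEC_OUT and PERF_OUT, and each output produced by the specification is queued until the performance model produces the matching output.

  entity conformance_monitor is
    port ( SPEC_OUT : in integer := 0;       -- output code observed from the Statecharts model
           PERF_OUT : in integer := 0;       -- output code observed from the ADEPT model
           MISMATCH : out boolean := false );
  end conformance_monitor;

  architecture sketch of conformance_monitor is
  begin
    process
      type code_array is array (0 to 63) of integer;
      variable pending    : code_array;      -- outputs produced by the specification but
      variable head, tail : natural := 0;    -- not yet matched by the performance model
    begin
      wait on SPEC_OUT, PERF_OUT;
      if SPEC_OUT'event then                 -- specification emits first (zero-delay assumption)
        pending(tail mod 64) := SPEC_OUT;
        tail := tail + 1;
      end if;
      if PERF_OUT'event then                 -- performance model must match, in order
        if head = tail or pending(head mod 64) /= PERF_OUT then
          MISMATCH <= true;                  -- observed output sequences disagree
        else
          head := head + 1;
        end if;
      end if;
    end process;
  end sketch;

Under the two assumptions above, the specification’s output always precedes the matching performance-model output, so a first-in, first-out comparison of this kind suffices to flag a disagreement between the observed output sequences.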

7.3 Developed methodology

We have precisely defined each step of the methodology. The methodology appears practical: it could be applied to a reasonably complex example and was able to detect anomalies and obtain performance estimates, as we expected. While we have not automated any step of the methodology, we have identified the extent to which each step can be automated and have given all the rules necessary for their automation. While some extra designer effort is required to specify the correlation of activities between the two models, it is minimal in the sense that the designer only specifies those details that cannot be automatically derived. In fact, we found that the amount of detail required is directly proportional to the number of input and output ports of the black-box representation of the Statecharts model. The designer is only required to specify the correlation of the input and output activities of the Statecharts model with the corresponding activities in the ADEPT model.


By effectively applying the methodology to a number of complex examples, we have demonstrated its feasibility. The methodology is scalable to complex design problems, since it supports modularity by breaking down the larger problem of developing and analyzing a complete implementation into a set of manageable subproblems. The breakdown was intuitive, with each subproblem representing the development and analysis of an orthogonal component of the overall system under design.

7.4 Experimental results

Several examples were derived from the IEEE 802.5 token-ring message-passing protocol. We chose a large enough subset of the original description of the system that the problem size and complexity were nontrivial. There are two categories of results: detection of errors, and very early estimation of system performance.

7.4.1 Detection of errors

We were successful in discovering several significant design errors. We discovered subtle implementation errors as well as errors in the specification itself. Most of these errors were discovered through a lack of conformance between the Statecharts models and their ADEPT counterparts; the lack of conformance was detected automatically during simulation. Another class of errors was detected through the discovery of unanticipated design scenarios uncovered by integrated simulation. While the two models conformed, these errors were nevertheless predicted, because the integrated simulation enabled us to explore operational scenarios that would not have been explored otherwise. The methodology also forced the designer to look closely into the specification, which resulted in the detection of several inconsistencies in the specification that might otherwise have remained undiscovered until the detailed implementation decisions had already been made.

7.4.2 Performance estimates

Through integrated simulation, we were able to study the impact of an implementation of a component of the system on the rest of the system without necessarily developing a complete performance model of the entire system. Such analysis would have been difficult to perform without integrated-simulation support. More importantly, the proposed implementation of the component was analyzed in the context of the rest of the system, instead of being analyzed in isolation, resulting in more realistic performance estimates of the implementation. While the quantitative accuracy of these estimates was not easy to evaluate in general, we were able to take reasonable advantage of their qualitative aspects when comparing design alternatives.

7.5 Future work

This dissertation restricted its scope to the study, development, and demonstration of feasibility of our methodology. However, a number of interesting issues need to be addressed in the future. A full implementation of the methodology and its application to real-life problems would be necessary to quantify its benefits. The methodology was developed on the platform of the Statecharts and ADEPT environments; it would be interesting to extend the ideas developed here to other modeling languages and environments. Extending the current methodology beyond the scope of reactive systems to data-intensive application domains is another challenging problem. We stated certain design assumptions in developing the algorithm for detecting conformance. While the assumptions are reasonable, it would be interesting to investigate the ramifications of relaxing them on the steps of the methodology. In our approach, we perform the simulation of the integrated model in a VHDL environment. It would be interesting to observe the effect of the simulation on the Statecharts component of the integrated model within the Statecharts modeling environment itself. This way, the designer would have visual feedback on the effects of incorporating ADEPT-related simulation information into the Statecharts model. It would also be interesting to translate the ADEPT model into an analogous Statecharts representation. This would allow one to extend analytical techniques from one modeling domain to the other. In fact, we have developed rules to translate an ADEPT model into Statecharts; these rules are presented in Appendix D. The reverse task, i.e., translating Statecharts models into ADEPT representations, is similar to a synthesis problem and requires further work. Further research is required to identify the advantages of such cross-domain analyses.

7.6 Summary

We have developed a design methodology that successfully supports model continuity during the early stages of reactive-system design. We have explored the theoretical foundations needed to support the methodology and supported our claims by demonstrating its feasibility, practicality, and effectiveness through a rich set of examples. We have provided the necessary groundwork for implementing our methodology. We have also pointed out several interesting extensions of our approach and believe it is widely applicable to many areas of digital-system design.


Bibliography

All90

Allen, J. Performance-Directed Synthesis of VLSI Systems. Proceedings of the IEEE, Feb, 1990.

Avi82

Avizienis, A. "Design diversity — The challenge of the eighties," Proceedings of the 12th Annual International Symposium on Fault-Tolerant Computing, June 22-24, 1982, Santa Monica, California. pp. 44-45.

AYL92

Aylor, J. H. and Waxman, R. and Johnson, B. W. and Williams, R. D. The Integration of Performance and Functional Modeling in VHDL. In Performance and Fault Modeling with VHDL. Schoen, J. M., Prentice Hall, Englewood Cliffs, NJ 07632, 1992, pages 22-145.

AYL90

Aylor, J. H. and Williams, R. D. and Waxman, R. and Johnson, B. W. and Blackburn, R. L. A Fundamental Approach to Uninterpreted/Interpreted Modeling of Digital Systems in a Common Simulation Environment. Technical Report TR # 900724.0, July 24, 1990.

BAG91

Bagrodia, R. L. and Shen, C. MIDAS: Integrated Design and Simulation of Distributed Systems. IEEE Transactions on Software Engineering 17(10) October 1991.

BAS75

Basili, V. R. and Turner, A. Iterative enhancement, a practical technique for software development. IEEE TOSE SE-1(4) Dec 1975.

BEG92

Beggs, R., Sawaya, J., Ciric, C. and Etzl, J. Automated Design Decision Support System. DAC 92:506-511.

BER91

Berry, G. and Benveniste, A. Another Look at Real-Time Programming, Proceedings of the IEEE 79, 1991

BER89

G. Berry. Real-Time Programming: General Purpose or Special-Purpose Languages, in G. Ritter, ed., Information Processing 89, Elsevier Science Publishers, 1989, pp. 11-17.

BER87

Berry, G. and Cosserat, L. The synchronous programming language ESTEREL and its mathematical semantics. Springer-Verlag, pages 389-449, 1987.

BHL94

J. Buck, S. Ha, E. A. Lee, and D. G. Messerschmitt. Ptolemy: A Framework for Simulating and Prototyping Heterogeneous Systems. Int. Journal of Computer Simulation, special issue on "Simulation Software Development," vol. 4, pp. 155-182, April, 1994.


BKA90

Brilliant, S.S., Knight, J.C., and Ammann, P.E. On the Performance of Software Testing using Multiple Versions. FTCS 20 pp. 408-415. 1990.

BLA85

Blackburn, R. L. and Thomas, D. E. Linking the Behavioral and Structural Domains of Representation in a Synthesis System. DAC 85:374-380.

BOE88

Boehm, B. A spiral model of software development and enhancement. IEEE Computer:61-72 May 1988.

BRA88

Brayton, R. K. and Camposano, R. and De Micheli, G. and Otten, R. H. J. M. and van Eijndhoven, J. T. J. The Yorktown Silicon Compiler System. In Silicon Compilation. Gajski, D.D., Addison-Wesley, 1988.

BUX89

Bux, W. Token-Ring Local-Area Networks and their Performance. Proceedings of the IEEE, Feb, 1989.

CAI75

Caine, S. and Gordon, E. PDL-A tool for software design. AFIPS Press, Montvale, New Jersey, pages 271-276, Anaheim, California, 1975.

CAL93

Calvez, J. P., Embedded Real-time Systems: A Specification and Design Methodology, Wiley Series in Software-Engineering Practice, 1993

CAM85

Camposano, R. and Rosenstiel, W. A Design Environment for the Synthesis of Integrated Circuits. 11th EUROMICRO Symposium on Microprocessing and Microprogramming 1985.

CHU89

Chu, C. M. and Potkonjak, M. and Thaler, M. and Rabaey, J. HYPER: An Interactive Synthesis Environment for High Performance Real Time Applications. Proceeding of the International Conference on Computer Design, pages 432-435, 1989.

CHV83

Chvalovsky, V. Decision Tables. Softw. Pract. Exper. 13:423-429 1983.

CUT90

Cutright, E. D. High-level performance modeling of a skip ring local area network using VHDL. Technical Report TR# 900525.0, CSIS, University of Virginia, May 25, 1990.

CUT91

Cutright, E. D. Performance Modeling of Fault-Tolerant Systems using VHDL. In M.S. Thesis. , Department of Electrical Engineering, University of Virginia, 1991.

DAV78

Davis, A. Requirements language processing for the effective testing of real-time software. ACM Softw. Eng. Notes 3(5):61-66 Nov 1978.

DAV88

Davis, A. M. A Comparison of Techniques for the Specification of External System Behavior. Communication of the ACM 31(9):1098-1115 September 1988.

DE90

De Micheli, G. and Ku, D. and Mailhot, F. and Truong, T. The Olympus Synthesis System. IEEE Design and Test of Computers October 1990.


DEM79

DeMarco, T. Structured Analysis and System Specification. Prentice Hall, Englewood Cliffs, New Jersey, 1979.

DRO88

Drongowski, P. J. A graphical hardware design language. 25th ACM/IEEE DAC:108-114 1988.

DUT89

Dutt, N. D. and Gajski, D. D. Designer Controlled Behavioral Synthesis. Proceedings of the 26th Design Automation Conference, pages 754-757, 1989.

DUT91

Dutt, N. D. and Kipps, J. R. Bridging High-Level Synthesis to RTL Technology Libraries. Proceedings of the 28th Design Automation Conference, pages 526-529, 1991.

FRA91

Franke, D. W. and Purvis, M. K. Hardware/Software Codesign: A Perspective. Proceedings of 13th International Conference on Software Engineering, pages 344-352, May 13-16, 1991.

GAJ94

Gajski, D. D., Vahid, F., Narayan, S. and Gong, J. Specification and Design of Embedded Systems, PTR Prentice Hall, 1994

GAJ92

Gajski, D. D. and Dutt, N. and Wu, A. and Lin, S. HIGH-LEVEL SYNTHESIS: Introduction to Chip and System Design. Kluwer Academic Publishers, 1992.

GV80

Gmeiner, L. and Voges, U. "Software Diversity in Reactor-Protection Systems: An Experiment", Safety of Computer Control Systems, R. Lauber, Ed., Pergamom Press, pp. 75-79, 1980.

GVN94

Gajski, D. D., Vahid, F., Narayan, S. A system-design methodology: Executable-specification refinement. In Proceedings of the European Conference on Design Automation (EDAC), 1994

HAB

Habib, S. Microprogrammed Architectures Specified Using Paisley. Computer Science Department Report.

HAR92

Harel, D. Biting the Silver Bullet. Toward a Brighter Future for System Development. IEEE Computer January 1992.

HAR88

Harel, D. On Visual Formalisms. CACM 31:514-530 1988.

HAR87a

Harel, D. Statecharts: A Visual Formalism For Complex Systems. In Volume 8:Science of Computer Programming. , 1987, pages 231-274.

HAR87b

Harel, D. and Pnueli, A. and Schmidt, J. P. and Sherman, R. On the Formal Semantics of Statecharts. IEEE Press, pages 54-64, July, 1987.

HAR91

Harr, R. E. and Stanculescu, A. G. Applications of VHDL to Circuit Design. Kluwer Academic Publishers, 1991.


HAT84

Hatley, D. The use of structured methods in the development of large software-based avionics systems. IEEE Press, pages 6-15, Washington, D.C., 1984.

HEY90

Heydon, A., et al. Miro: Visual Specification of Security. IEEE TOSE 16(10):403-414 Apr 1990.

HIL85

Hilfinger, P. N. A High Level Language and Silicon Compiler for Digital Signal Processing. Proceedings of the IEEE Custom Integrated Circuits Conference, pages 213-216, 1985.

HOO92

Hooman, J. J. M., et al. A Compositional Axiomatization of Statecharts. Theor. Comput. Sci. 101:289-335 1992.

HUI91

Huizing, C., et al. Introduction to design choices in the semantics of Statecharts. Inf. Process. Lett. 37:205-213 1991.

HUI88

Huizing, C., et al. Modeling Statecharts Behavior in a fully abstract way. Springer-Verlag, New York, pages 271-294, 1988.

IEE88

IEEE. IEEE Standard VHDL Language Reference Manual. IEEE Inc., NY, 1988.

IEE87

IEEE. Token-Ring Local Area Network: Premier Issue, IEEE Network, Jan 1987, Vol. 1, No. 1.

IEE85

IEEE. An American National Standard IEEE Standards for Local Area Networks: Token Ring Access Method and Physical Layer Specifications. ANSI/IEEE Std 802.5-1985 ISO Draft Proposal 8802/5. IEEE Inc., 1985.

ILO92

i-Logix Inc., ExpressVHDL Documentation, 1992

JAC92

Jacome, M. F. and Director, S. W. Design Process Management for CAD Frameworks. 29th ACM/IEEE DAC:500-505 1992.

KNA85

Knapp, D. W. and Parker, A. C. A Unified Representation for Design Information. Proceedings of the 7th International Symposium on Computer Hardware Description Languages and their Applications, pages 337-353, 1985.

KRO92

Kronlöf, K. Method Integration: Concepts and Case Studies, Wiley Series in Software Based Systems, 1992

LOR91

Lor, K. E. and Berry, D. M. Automatic Synthesis of SARA Design Models from System Requirements. IEEE Transactions on Software Engineering 17(12):1229-1240 December 1991.

LSU89

Lipsett, R., Schaefer, C., and Ussery, C. VHDL: Hardware Description and Design. Kluwer Academic Publishers. 1989.


MCF90

McFarland, M.C. and Parker, A. C. and Camposano, R. The High-Level Synthesis of Digital Systems. Proceedings of the IEEE, Feb, 1990.

MEN89

Meng, T. H. Y. and Brodersen, R. W. and Messerschmitt, D. G. Automatic Synthesis of Asynchronous Circuits from High-Level Specifications. IEEE Transactions on CAD 8(11) Nov 1989.

MYL92

Mylopoulos, J. and Chung, L. and Nixon, B. Representing and Using Nonfunctional Requirements: A Process Oriented Approach. IEEE TOSE 18 Jun 1992.

NAK90

Nakamura, Y. and Oguri, K. and Nagoya, A. Synthesis from Pure Behavioral Descriptions. In High-Level VLSI Synthesis. Camposano, R. and Wolf, W., Kluwer Academic Publishers, Boston, 1990.

NAR91

Narayan, S. and Vahid, F. and Gajski, D. System Specification and Synthesis with the SpecCharts Language. Proc. ICCAD:266-269 1991.

NG93

Narayan, S. and Gajski, D. D. Features Supporting System-Level Specifications in HDLs. In EURO-DAC ’93, CCH Hamburg, Germany, 1993, pp. 540-545.

OPD92

Opdahl, A. L. and Solvberg, A. A Framework for Performance Engineering during Information System Development. Advanced Information Systems Engineering (Advanced Information Systems Engineering CAiSE `92, Manchester, UK):65-87 May 12-15, 1992 proceedings.

PET77

Peterson, J. Petri-nets. ACM Computing Surveys 9(3):223-252 Sep 1977.

PET81

Peterson, J. L. Petri Net Theory and the Modeling of Systems. Prentice Hall, Englewood Cliffs, N.J., 1981.

PNU91

Pnueli, A., et al. What is in a Step: On the Semantics of Statecharts. Springer-Verlag, Berlin, pages 244-264, 1991.

RMB81

Ramamoorthy, C.C., Mok, Y.R., Bastani, F.B., Chin, G.H., and Suzuki, K. "Application of a Methodology for the Development and Validation of Reliable Process Control Software", IEEE Transactions on Software Engineering, Vol SE-7, No. 6, November 1981.

RAO90

Rao, R. and Johnson, B. W. and Aylor, J. H. A Building Block Approach to Performance Modeling Using VHDL Technical Report TR# 900116.0, Department of Electrical Engineering, University of Virginia, Jan 16, 1990.

REV94

Revel, S. Integration of Specification and Performance Models. IRESTE 3, July 1994.

ROC82

Rockstorm, A. and Saracco, R. SDL-CCITT specification and description language. IEEE Trans. Commun. 30(6):1310-1318 June 1982.


SAR94a

Sarkar, A., Waxman, R. and Cohoon, J.P. System Design Utilizing Integrated Specification and Performance Models. Proceedings, VHDL International Users Forum, Oakland, California, May 1-4, 1994, pp 90-100.

SAR94b

Sarkar, A., Waxman, R. and Cohoon, J.P., A survey of specification methodologies for reactive systems, Submitted to Current Issues in Electronic Modeling, Issue 3, Kluwer Academic Publishers, 1995.

SCH93

Schefström, D. and van den Broek, D., Tool Integration: Environments and Frameworks, Wiley Series in Software Based Systems, 1993.

SCH89

Schefström, D. Building a Highly Integrated Development Environment Using Pre-existing Parts, In IFIP ’89, San Francisco, CA, USA, September 1989.


SE86

Saglietti, F. and Ehrenberger, W. "Software Diversity - Some Considerations about its Benefits and its Limitations." Digest of Papers: SAFECOMP ’86, 5th International Workshop on Achieving Safe Real-Time Computer Systems. France, October 1986.

SOR93

Sortais, S. Establishment of Link Between Specification and Performance Models of a Token Ring LAN. IRESTE 3, April 1993.

SRI90

Srinivasan, S. ADEPT: An Advanced Design Environment Prototype Tool. In M.S. Thesis. , Department of Electrical Engineering, University of Virginia, 1990.

SRI91

Srivastava, M. B. and Brodersen, R. W. Rapid-Prototyping of Hardware and Software in a Unified Framework. EECS Department, UC at Berkeley, 1991.

SSW92

Srinivasan, S. and Sarkar, A. and Waxman, R. and Johnson, B. W. Integrating Operational Specification and Performance Modeling. Fall'92 VHDL International Users' Conference, Washington DC, 10/18/92.

STR87

Strole, N. "The IBM token-ring network — A Functional Overview." IEEE Network, Jan 1987.

SUN91

Sun, J. S. and Srivastava, M. B. and Brodersen, R. W. SIERA: A CAD Environment for Real-Time Systems. EECS Department, UC at Berkeley, 1991.

SWA92

Swaminathan, G. Colored Petri Net Descriptions for UVa Primitive Modules. Technical Report TR# 920922.0, Department of Electrical Engineering, University of Virginia, Sep 22, 1992.

TYS92

Tyszberowicz, S. and Yehudai, A. OBSERV - A Prototyping Language and Environment. ACM Transactions on Software Engineering and Methodology, July 1992.

VG92

Vahid, F. and Gajski, D. D. Specification Partitioning for System Design. DAC 92:219-224.

VHT86

Vouk, M.A., Helsabeck, M.L., Tai, K.C., and McAllister, D.F. "On Testing of Functionally Equivalent Components of Fault-Tolerant Software". Proc. COMPSAC 86, 1986, pp 414-419.

WAR86

Ward, P. The transformation schema: An extension of the dataflow diagram to represent control and timing. IEEE Trans. Softw. Eng. 12(2):198-210 Feb 1986.

WIN90

Wing, J. M. A Specifier's Introduction to Formal Methods. IEEE Computer September 90.

WOO92

Woo, N. and Wolf, W. and Dunlop, A. Compilation of a single specification into hardware and software. AT&T Bell Labs, 1992.

ZAV91

Zave, P. An Insider's Evaluation of Paisley. IEEE Transactions on Software Engineering 17(3):212-225 March 1991.

ZAV86

Zave, P. and Shell, W. Salient features of an executable specification language and its environment. IEEE Transactions on Software Engineering 12(2):312-325 Feb, 1986.

ZAV84

Zave, P. The Operational versus the Conventional Approach to Software Development. CACM 27(2) February 1984.


Appendix A Implementation of Methodology

A.1 Introduction

We describe the steps in implementing integrated simulation of the ADEPT model and its Statecharts counterpart for a component of the system under design. We illustrate these steps using the watchdog-timer component of the token-ring system. To give a better idea of the steps in the integration, we first describe the Statecharts and ADEPT models of the watchdog timer. The Statecharts representation of the watchdog timer is shown in Figure A.1.

Figure A.1 Statecharts model for watchdog timer

The watchdog timer exists in two possible states, WATCH_TM_OFF and WATCH_TM_ON, depending on whether the watchdog timer is inactive or active, respectively. The state of the timer is determined by the state of the monitor at the station with which the watchdog timer is associated. The events en(MON_ACT) and en(MON_INA) are generated whenever the monitor becomes active or inactive, respectively. As described in Figure A.1, if the timer is inactive when the monitor becomes active, the timer becomes active by entering the WATCH_TM_ON state. Conversely, while in the active state, if the monitor becomes inactive, the timer returns to the WATCH_TM_OFF state. While in the WATCH_TM_ON state, if the event TOKEN_ARRIVED occurs, the timer exits and reenters the WATCH_TM_ON state. If the timer remains continuously in the WATCH_TM_ON state for 100 clock units, the timer exits and reenters the WATCH_TM_ON state and generates the TIME_OUT event, as specified by the transition tm(en(WATCH_TM_ON),100) / TIME_OUT.

We now describe the ADEPT model of the proposed implementation.

Figure A.2 ADEPT model for watchdog timer (a) ADEPT symbol (b) ADEPT component

Figures A.2(a) and A.2(b) respectively present high- and low-level views of the performance model of a proposed implementation of the watchdog timer. The model is briefly described as follows. A token arrives at the component through port wt_token_in and exits through port wt_normal. The status of the monitor is communicated through the wt_mon_stat port: the presence or absence of a token at this port indicates the active or inactive state of the monitor, respectively. The timer indicates a time-out by placing a token on the wt_timeout port.


In the remainder of this appendix, the various sub-steps involved in the integration process are described in chronological order.

A.2 Performance annotation

Figure A.3 displays the performance-annotated Statecharts model of the watchdog timer.

Figure A.3 Performance annotated Statecharts of watchdog timer

Notice that the Statecharts model still retains the original structure; the intricate details of the states WATCH_TM_ON_TO_HANDLER and WATCH_TM_ON can be hidden.

A.3 Identifying model interfaces using black-box descriptions


Figure A.4 Black-box representation of the Statecharts model of the watchdog timer

To identify the correlation between the Statecharts and the ADEPT model, we first derive black-box abstractions of both models, identifying only their interfaces with their environments. For the Statecharts component, we have four input events (TOKEN_ARRIVED, en(MON_INA), en(MON_ACT), and TIME_OUT_ACK) and two output events (TIME_OUT and TIME_OUT_ERR). The ports TIME_OUT_ACK and TIME_OUT_ERR arise due to performance annotation. The event TIME_OUT_RFA is not visible in the interface, as it is not communicated to the environment of the WATCHDOG_TIMER state. The ADEPT model has two input ports and one output port. The correlation between the input and output ports is described next in Section A.4.


A.4 Identifying model correlation

Figure A.5 summarizes the correlation between the two models, row by row:

Simulation activity: The token arrives at the station.
Statecharts activity: Generate the event TOKEN_ARRIVED as input to the Statecharts model.
ADEPT activity: Put a token on the wt_token_in port.

Simulation activity: The monitor becomes active.
Statecharts activity: The event en(MON_ACT) occurs as input to the Statecharts model.
ADEPT activity: Put a token on the wt_mon_stat port.

Simulation activity: The monitor becomes inactive.
Statecharts activity: The event en(MON_INA) occurs as input to the Statecharts model.
ADEPT activity: Remove the token from the wt_mon_stat port.

Simulation activity: A time-out occurs in the ADEPT model.
Statecharts activity: TIME_OUT_ACK is generated as input to the Statecharts model.
ADEPT activity: A token arrives on the wt_timeout port.

Figure A.5 Identifying the correlations between the models

Notice that the correlations described are expressed at the abstraction levels and modeling domains of the Statecharts and ADEPT environments, instead of being expressed in the language of VHDL. The VHDL interpretation of these correlations is described in the following section.

A.5 Generation of VHDL

A.5.1 Statecharts model

The VHDL code corresponding to the Statecharts model is automatically generated. The following is the VHDL entity declaration for the Statecharts specification of the watchdog timer. The architecture is also automatically generated by the i-Logix ExpressVHDL toolset [ILO92].

entity WATCHDOG_TIMER_S is
  port ( EN_MON_ACT    : in trigger;
         EN_MON_INA    : in trigger;
         TOKEN_ARRIVED : in trigger;
         TIME_OUT_ACK  : in trigger;
         TIME_OUT      : buffer trigger;
         TIME_OUT_ERR  : buffer trigger );
end WATCHDOG_TIMER_S;


A.5.2 ADEPT model

The following is the entity declaration for the ADEPT model of the watchdog timer. The interface was easily hand-extracted from the generated VHDL code.

entity WATCHDOG_TIMER_A is
  port ( WT_TOKEN_IN : inout Token_res;
         WT_TIMEOUT  : inout Token_res;
         WT_NORMAL   : inout Token_res;
         WT_MON_STAT : in Token );
end WATCHDOG_TIMER_A;

A.5.3 Link code

The following is the code needed for the two models to interact. Generation of this code can be automated, given the information in Figure A.5. (The library and use clauses that make the ExpressVHDL trigger type and the ADEPT token types and procedures visible are omitted here.)

entity integrated_model is
end integrated_model;

architecture behavior of integrated_model is

  signal EN_MON_ACT, EN_MON_INA, TOKEN_ARRIVED,
         TIME_OUT_ACK, TIME_OUT, TIME_OUT_ERR : trigger;

  signal WT_TOKEN_IN, WT_TIMEOUT, WT_NORMAL, WT_MON_STAT : Token_res;

  -- Statecharts component of the integrated model
  component WATCHDOG_TIMER_S
    port ( EN_MON_ACT    : in trigger;
           EN_MON_INA    : in trigger;
           TOKEN_ARRIVED : in trigger;
           TIME_OUT_ACK  : in trigger;
           TIME_OUT      : buffer trigger;
           TIME_OUT_ERR  : buffer trigger );
  end component;
  for all : WATCHDOG_TIMER_S
    use entity work.WATCHDOG_TIMER_S(Ar_WATCHDOG_TIMER_S);

  -- ADEPT component of the integrated model
  component WATCHDOG_TIMER_A
    port ( WT_TOKEN_IN : inout Token_res;
           WT_TIMEOUT  : inout Token_res;
           WT_NORMAL   : inout Token_res;
           WT_MON_STAT : in Token );
  end component;
  for all : WATCHDOG_TIMER_A
    use entity work.WATCHDOG_TIMER_A(Ar_WATCHDOG_TIMER_A);

begin

  -- map ports for WATCHDOG_TIMER_S
  SPEC_MODEL : WATCHDOG_TIMER_S
    port map ( EN_MON_ACT, EN_MON_INA, TOKEN_ARRIVED,
               TIME_OUT_ACK, TIME_OUT, TIME_OUT_ERR );

  -- map ports for WATCHDOG_TIMER_A
  PERF_MODEL : WATCHDOG_TIMER_A
    port map ( WT_TOKEN_IN, WT_TIMEOUT, WT_NORMAL, WT_MON_STAT );

  -- Connect for monitor status
  LINK_WATCHDOG_MONITOR_STATUS : process
  begin
    if EN_MON_INA'event then
      release_control_token(WT_MON_STAT);
    end if;
    if EN_MON_ACT'event then
      place_control_token(WT_MON_STAT, def_colors, mon_def_colors, 0 ns);
    end if;
    wait on EN_MON_INA, EN_MON_ACT;
  end process;

  -- Connect for token arrival
  LINK_WATCHDOG_TOKEN_ARRIVED : process
  begin
    if TOKEN_ARRIVED'event then
      -- The original listing is truncated here; this call reconstructs the
      -- correlation of Figure A.5 (place a token on wt_token_in), with color
      -- arguments assumed by analogy with the monitor-status link above.
      place_control_token(WT_TOKEN_IN, def_colors, def_colors, 0 ns);
    end if;
    wait on TOKEN_ARRIVED;
  end process;

end behavior;
