PANEL: Hierarchical and Incremental Verification for System Level Design: Challenges and Accomplishments

Chair: Grant Martin, Cadence Berkeley Labs, Berkeley, CA, USA. Email: [email protected]

Organizer: Sandeep Shukla, Electrical and Computer Engineering Department, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA. Email: [email protected]
Abstract. This panel focuses on two problems in the formal and semi-formal verification of co-design models. The first can be categorized as hierarchical, or compositional, verification; the second is incremental verification. Advances and challenges in both are important for realizing verification strategies for reasonably sized models, including hardware models as well as hardware/software co-design models. This short position paper explains the MEMOCODE committee's view of these problems, followed by short position statements from the panelists.
1. Hierarchical and Compositional Verification

In many design situations, pre-designed blocks ("IPs") are used to build a system. We assume that each block is correctly designed with respect to some formal specifications and measures (a high-level block-functional specification, a set of proven block properties, etc.). How can the following problems be solved?

1. If blocks that are known to be correct are composed to build a system, what is a definition of the correctness of the system, and how can it be specified, represented and used?
2. Blocks typically make assumptions about their environment; if the assumptions are not satisfied, the blocks do not work correctly. A necessary prerequisite for the correctness of a composed system is the verification of these block assumptions. How can such assumptions be specified, composed and efficiently verified for large systems? (A runtime rendering of such assumptions is sketched after this list.)
3. How can a model of a system be built which (i) is smaller than the composition of all blocks and (ii) can be used to prove partial properties, such as the successful communication of some blocks? How can we guarantee that the smaller model is conservative, i.e., that it still allows verification and not only bug-finding?
4. How can we solve the above problems if, as is commonly the case, the internal details of some of the blocks are unknown?
5. Is there a role for standardized protocols for block communication? Is there a possibility to re-use verification knowledge about these protocols?
6. How can the notion of a block functional specification be formalized so that efficient system functional specifications can be composed from block specifications?
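To make problem 2 concrete, the following is a minimal C++ sketch of block contracts checked over a simulation trace; the bus signals, block names and the simple arbiter/master contracts are hypothetical, and a real compositional framework would discharge assumptions by proof rather than by trace monitoring.

```cpp
#include <functional>
#include <iostream>
#include <string>
#include <vector>

// Observable state shared at the block boundary (hypothetical example).
struct BusState {
    bool request;
    bool grant;
};

// A block contract: the block assumes `assume` holds of its environment
// and, under that assumption, promises `guarantee`.
struct Contract {
    std::string block;
    std::function<bool(const BusState&)> assume;
    std::function<bool(const BusState&)> guarantee;
};

// Runtime assume/guarantee check over a trace: a violated assumption
// blames the environment, a violated guarantee blames the block itself.
void check_trace(const std::vector<Contract>& contracts,
                 const std::vector<BusState>& trace) {
    for (size_t t = 0; t < trace.size(); ++t) {
        for (const auto& c : contracts) {
            if (!c.assume(trace[t])) {
                std::cout << "t=" << t << ": environment of " << c.block
                          << " violated its assumption\n";
            } else if (!c.guarantee(trace[t])) {
                std::cout << "t=" << t << ": " << c.block
                          << " violated its guarantee\n";
            }
        }
    }
}

int main() {
    // Hypothetical contracts: the arbiter guarantees "grant implies
    // request"; the master assumes exactly that of its environment.
    std::vector<Contract> contracts = {
        {"arbiter",
         [](const BusState&) { return true; },                      // no assumption
         [](const BusState& s) { return !s.grant || s.request; }},  // grant => request
        {"master",
         [](const BusState& s) { return !s.grant || s.request; },   // relies on arbiter
         [](const BusState&) { return true; }},
    };
    std::vector<BusState> trace = {{true, true}, {false, true}, {false, false}};
    check_trace(contracts, trace);  // flags t=1: grant without request
    return 0;
}
```

Note how the same predicate appears once as the arbiter's guarantee and once as the master's assumption; this pairing is what makes assume/guarantee composition possible.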
2. Incremental Verification

Automatic generation of functional test cases and coverage collection are key techniques for automating verification. Independent of the level of abstraction, we can identify the following main tasks:

- collecting coverage goals (the goals can relate to an implementation or to an abstract specification);
- coding the coverage goals in a machine-readable form for automatic processing;
- effectively generating functional tests;
- collecting coverage data and reporting results;
- forwarding coverage data back to the test generation.

While these tasks are far from trivial even at the RT level, in the context of higher-level description formalisms each of them may involve additional issues that need to be addressed. These span a very broad scope, involving performance, reusability, completeness, formal verification, and so on. Through this panel, we plan to discuss the cogent issues and possible solutions. (A minimal closed-loop rendering of the tasks above follows.)
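As a toy illustration of how these tasks close the loop, here is a minimal C++ sketch; the unit under test, the branch-id coverage goals and the random stimulus distribution are all hypothetical stand-ins for what a real flow would extract from an implementation or specification.

```cpp
#include <iostream>
#include <random>
#include <set>

// Hypothetical unit under test; each branch identifier is a coverage goal.
int dut(int a, int b, std::set<int>& covered) {
    if (a > b)      { covered.insert(0); return a - b; }
    else if (a < b) { covered.insert(1); return b - a; }
    else            { covered.insert(2); return 0; }
}

int main() {
    // Coverage goals in machine-readable form: branch ids 0..2.
    const std::set<int> goals = {0, 1, 2};
    std::set<int> covered;
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(-10, 10);

    // Generate tests, collect coverage, and feed it back: stop as soon
    // as all goals are hit instead of running a fixed number of tests.
    int tests = 0;
    while (covered != goals && tests < 10000) {
        dut(dist(rng), dist(rng), covered);
        ++tests;
    }
    std::cout << "hit " << covered.size() << "/" << goals.size()
              << " branch goals after " << tests << " tests\n";
    return 0;
}
```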
3. Panelist Position Statements

3.1 Marten Van Hulst

Formal Property Verification (FPV) at the block level has, despite its limited scope, enjoyed moderate popularity because there is an automated translation from the RTL description to the system model that is checked, which keeps a close relationship between the two. At the system level, FPV is only feasible when far
more extensive (manual) abstractions are performed on the system description, to the point where the link between the real and the abstracted system becomes unclear. As a result, it is hard to justify large investments in formally verifying the abstract model, as the results cannot be carried over to the concrete system. It is therefore reasonable to expect that a formal language for system-level design and verification will first and foremost have to enable cooperation among automated test generation, the specification of dynamic checkers, and coverage goals, in addition to supporting "true" formal analysis tools.
3.2 Franco Fummi

Automatic generation of functional test patterns can be based on high-level fault models; in fact, targeting faults is an efficient criterion to guide the generation of patterns. However, such fault models must show the following characteristics:

- High correlation with design errors. Test patterns identified by targeting faults must be able to discriminate the erroneous behavior of a system affected by a design error, to increase verification capability.
- Independence from hardware and software models. High-level functional testing must be applied to descriptions which are possibly not yet partitioned, thus requiring fault models applicable to both hardware and software descriptions.
- Applicability to different abstraction levels. The same fault model must be applicable from system-level to gate-level descriptions, in order to inherit and enrich test patterns when passing from one level to the next.
- Direct application to language constructs, in order to preserve the majority of already addressed faults during the code refinements typical of incremental design. (A minimal sketch of such a construct-level fault follows this statement.)

Moreover, to explicitly address hierarchical verification, methods are required to justify and propagate test patterns across modules. Genetic algorithms (GAs), SAT and constraint logic programming (CLP) could be methods to face this issue. A high-level fault model with these characteristics allows the definition of a continuous testing flow, from system level to gate level, which implies the confluence of verification and testing.
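One way to read these requirements is as mutation-style fault injection at the language-construct level. The following C++ sketch is hypothetical: a toy counter step function, a single relational-operator fault, and a handful of candidate patterns, of which only the boundary pattern discriminates the faulty behavior, illustrating why fault-targeted pattern generation is efficient.

```cpp
#include <iostream>
#include <vector>

// Reference behavior and a single high-level fault: the wrap comparison
// >= is mutated to >, a language-construct-level fault that applies
// equally to hardware and software descriptions of the same counter.
int step_ref(int x)   { return x >= 99 ? 0 : x + 1; }
int step_fault(int x) { return x >  99 ? 0 : x + 1; }

int main() {
    // Candidate test patterns; a pattern "detects" the fault when the
    // faulty model's output differs from the reference output.
    std::vector<int> patterns = {0, 50, 98, 99, 100};
    for (int p : patterns) {
        int good = step_ref(p), bad = step_fault(p);
        if (good != bad)
            std::cout << "pattern " << p << " detects the fault ("
                      << good << " vs " << bad << ")\n";
    }
    return 0;  // only the boundary pattern 99 detects this fault
}
```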
3.3 Carl Pixley

Hierarchical, compositional and closed-loop coverage-stimulus methodologies have been the Holy Grail of EDA for many years. So far there have not been many practical tools to support these goals. However, one thing that is helping in some design groups is emerging: constraint-based verification. Constraints are a kind of assertion that can be used in simulation to (1) monitor inputs to a design for correctness and (2) generate valid inputs to a design. Constraints can also be used in formal model checking to define the environment of a block under verification; when the block is connected to its true environment, they become assertions to be verified. Due to the latter property, constraints can be used as a basis for assume/guarantee (i.e., compositional) reasoning. In addition, constraint-based environments can be synthesized so that they can be compiled onto an emulator, so that constraint-based simulation can be run at emulation speed, without the overhead of communication with a software driver. True constraint-based verification must have the above properties. This approach has been used at module, block and unit levels at various companies; however, so far as I know, it has not yet been scaled up to a full SoC level of integration in practice. The power of this approach is that design information is captured once and used in many ways, reducing the cost of verification. Nevertheless, there is no silver bullet in verification: formal information must be captured somehow. C/C++/SystemC/etc. high-level models are used ubiquitously in the semiconductor industry, yet we still lack the ability to compare high-level models to RTL models and thus leverage the efficiency of high-level verification.
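The dual use of a constraint, as an input monitor and as a stimulus generator, can be sketched in a few lines of C++. The bus input format and the `legal` predicate below are hypothetical, and the generator uses naive rejection sampling where a real constraint solver would be far smarter; this is only a sketch of the idea, not of any tool's implementation.

```cpp
#include <iostream>
#include <optional>
#include <random>

// Hypothetical bus input: an opcode and an address.
struct BusIn { unsigned op; unsigned addr; };

// One environment constraint: legal opcodes are 0..3 and the address
// must be word-aligned.
bool legal(const BusIn& in) { return in.op < 4 && (in.addr % 4) == 0; }

// Use 1: monitor -- check that stimuli driven into the block are legal.
bool monitor(const BusIn& in) {
    if (!legal(in)) std::cout << "constraint violated by input\n";
    return legal(in);
}

// Use 2: generator -- produce legal stimuli by rejection sampling.
std::optional<BusIn> generate(std::mt19937& rng, int max_tries = 1000) {
    std::uniform_int_distribution<unsigned> d(0, 255);
    for (int i = 0; i < max_tries; ++i) {
        BusIn in{d(rng), d(rng)};
        if (legal(in)) return in;
    }
    return std::nullopt;
}

int main() {
    std::mt19937 rng(1);
    for (int i = 0; i < 3; ++i) {
        if (auto in = generate(rng)) {
            monitor(*in);  // generated inputs satisfy the constraint
            std::cout << "drive op=" << in->op << " addr=" << in->addr << "\n";
        }
    }
    // When the block sits in its true environment, the same predicate
    // `legal` would instead be asserted on the observed inputs, which is
    // what enables assume/guarantee reuse of the constraint.
    return 0;
}
```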
3.4 Forrest Brewer

I must preface my position with the disclaimer that my experience is primarily in synthesis, not verification. However, for large-scale design activities, management of design correctness (or at least design consistency) is a means to lower project risk and costs. Despite this, not a single large-scale project I have experience with had a formal abstract model that survived the RT-level design cycle. There were many reasons for this; most had to do with the cost of maintaining an abstracted model that was sequentially related to the RT design, which invariably underwent substantial change as constraints from the physical level (layout, delay, power, ...) became evident. The primary medium for this change is timing changes required to meet performance or quality goals; these changes often required functional changes to accommodate similar overall behavior. The strategy for maintaining consistency was invariably simulation-based validation of the system, often augmented by fault models and coverage tools. An important consequence of this is that the systems in question were never even fully specified (other than at the gate level). Given that future systems will be constructed out of a large number of externally designed components, some mechanism for providing abstracted models of component behavior is required, even if the "verification" is simulation or a symbolic-trajectory heuristic. This would be sufficient to validate the interfaces for simple communication protocols; however, much deeper abstractions are needed for verifying behavioral correctness or for formal property checking. My suspicion is that design checking by simulation, augmented by fault modeling where the faults are derived from common design application errors, will be the solution of practice, if only because of the ease of integration with current design flows. On the other hand, for portions of the system with deep sequential complexity, solutions will range from formal property verification to prespecified families of protocols with known safe behavior or easy verifiability.
3.5 Hans Eveking

Behavioral, synthesizable RT-level descriptions will remain the only complete documentation of a design in the near future. Most formal verification engines, however, work at the gate level and do not exploit the compact and abstract view of RT-level documentation. I believe that there is a large potential for hierarchical system-level verification employing information at the RT level. A method able to exploit this potential should comprise:

- RT-level representations which are much easier to manipulate and modify than, e.g., VHDL or Verilog descriptions;
- techniques specifically developed for system-level properties such as the interoperability of blocks or flow properties;
- automated techniques to produce property-specific abstractions for large designs (one standard such abstraction is sketched below).

Such a method could raise the applicability of formal property verification to the system level without requiring additional effort for the maintenance of an abstract model.
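One classical instance of a property-specific abstraction is cone-of-influence reduction: keep only the signals a property can observe, directly or transitively, and abstract everything else away. The following C++ sketch works over a hypothetical toy dependency graph standing in for an RT-level model; it illustrates the technique rather than any particular tool.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// Toy RT-level model as a dependency graph: each signal lists the
// signals its next-state/output function reads.
using DepGraph = std::map<std::string, std::vector<std::string>>;

// Cone-of-influence reduction: transitively collect every signal the
// property's signals depend on; the rest can be removed from the model.
std::set<std::string> cone(const DepGraph& g,
                           const std::set<std::string>& prop_signals) {
    std::set<std::string> keep = prop_signals;
    std::vector<std::string> work(prop_signals.begin(), prop_signals.end());
    while (!work.empty()) {
        std::string s = work.back();
        work.pop_back();
        auto it = g.find(s);
        if (it == g.end()) continue;
        for (const auto& dep : it->second)
            if (keep.insert(dep).second) work.push_back(dep);
    }
    return keep;
}

int main() {
    DepGraph g = {
        {"ack",     {"req", "state"}},
        {"state",   {"state", "req"}},
        {"led",     {"counter"}},      // irrelevant to the property
        {"counter", {"counter"}},
    };
    // A property mentioning only "ack" lets us drop led and counter.
    for (const auto& s : cone(g, {"ack"})) std::cout << s << " ";
    std::cout << "\n";  // prints: ack req state
    return 0;
}
```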
3.6 Constance Heitmeyer

To describe the accomplishments and the verification challenges that lie ahead, I will briefly describe the formal techniques and tools we are applying to supply evidence of the security of a small software-based communications system. To be certified for operational use, this system must enforce data separation, i.e., prevent data in one part of memory from "leaking" into (or otherwise influencing) data in another part. Evidence of the system's security will consist of formal specifications and formal verification, using theorem-proving technology, that the specifications satisfy selected properties, i.e., enforce separation. To ensure that only legal data flows occur, the system includes a two-part separation kernel that mediates all memory accesses. Our approach to providing evidence that the system operates securely consists of three major steps. First, we will develop a formal statement of the notion of separation (the security model) and an "abstract" formal specification of the system, represented as a state machine, and then prove, using automated theorem proving, that the abstract specification satisfies the security model. Second, we will develop 1) abstract state machine specifications of the two kernel parts and 2) "concrete" specifications of the code implementing the kernel parts, and prove, using theorem proving, that each concrete specification refines the corresponding abstract specification. Code walk-throughs will be used to show that the code implementing each part satisfies the code specification. Finally, the abstract specifications of the two kernel parts and an abstract specification of the remaining system will be composed, and we will prove formally, again using theorem-proving technology, that the composition satisfies the abstract system specification. Future challenges include the following:

- Scalability. Our current approach is likely to work because the system is quite small. How to scale this approach is an important problem.
- Test case generation. Code walk-throughs are tedious, error-prone, and labor-intensive. A more cost-effective way to demonstrate that the code satisfies the code specification is to automatically construct test cases from the specification (based on some coverage criterion) and then use the test cases to evaluate the code.
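For readers unfamiliar with data separation, the following toy C++ rendering of a separation kernel mediating memory accesses may help; the partition map and the access rule are hypothetical, and the actual evidence in this approach comes from theorem proving over formal specifications, not from executable checks like this one.

```cpp
#include <iostream>
#include <map>

// Toy model of the data-separation property: every memory address is
// statically assigned to one partition, and the kernel mediates all
// accesses, rejecting any that would let data cross partitions.
struct SeparationKernel {
    std::map<unsigned, int> partition_of;  // address -> partition id

    // A subject running in partition `p` may touch address `a` only if
    // the address belongs to that same partition.
    bool access(int p, unsigned a) const {
        auto it = partition_of.find(a);
        bool ok = (it != partition_of.end() && it->second == p);
        if (!ok) std::cout << "denied: partition " << p
                           << " -> address " << a << "\n";
        return ok;
    }
};

int main() {
    SeparationKernel k;
    k.partition_of = {{0x00, 0}, {0x04, 0}, {0x10, 1}};
    k.access(0, 0x00);  // allowed: address is in the subject's partition
    k.access(0, 0x10);  // denied: access would cross partitions
    return 0;
}
```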
3.7 Moderator Comments: Grant Martin

A few years ago a colleague at BNR/Nortel, Robert Hum, with long experience in verification tools and methods, noted that the inevitable fate of designers was going to be "Death by Simulation". Despite all the advances in verification - formal, semi-formal and completely informal - this still seems to be the destiny of our design teams. The need for abstract formal models of design entities is well described by our panelists, but the practical problems in relying on them and on formal verification, rather than on endless amounts of simulation, are well described too. So we do seem destined to live with large simulations for a long time. Ideas such as constraint-based and assertion-based verification will certainly help in getting the highest value out of simulation and emulation, and augmenting these with fault modeling and with formal interface and protocol verification methods seems a fruitful way to advance. Measuring coverage to guide test generation also seems a fruitful idea for controlling the simulation explosion. This panel discussion should be a good chance for this variety of views and ideas to be aired, and we need active audience participation in the debate as well.
4. Acknowledgement

The topic of this panel was formulated by the conference program committee members through lengthy e-mail discussions, in the process of deciding on a list of the most challenging issues confronting the adoption of formal methodologies in large-scale system design. A number of issues were identified, and the program committee considered hierarchical and incremental verification to be of great importance; hence we decided on a panel discussion. Our special thanks go to all the panelists and to the original owners of these two issues during their formulation: Forrest Brewer, Hans Eveking, Grant Martin, Harry Foster, Hillel Miller and Marten Van Hulst.