Approximations for Fixpoint Computations in Symbolic Model Checking*

Roderick Bloem¹, In-Ho Moon¹, Kavita Ravi², and Fabio Somenzi¹

¹ Department of Electrical and Computer Engineering, University of Colorado, Boulder, CO 80309-0425
{Roderick.Bloem,Mooni,Fabio}@Colorado.EDU
² Cadence Design Systems, New Providence, NJ 07974-1143
[email protected]
Abstract. We review the techniques for over- and underapproximation used in symbolic model checking and their applications to the efficient computation of fixpoints.
1 Introduction

Model checking has emerged as one of the most effective approaches to the formal verification of complex reactive systems. Model checking is based on the exploration of the state space of the system to be verified. The use of Binary Decision Diagrams (BDDs [4]) has led to Symbolic Model Checking, and has been quite effective at addressing the so-called state explosion problem [5]. However, it is often the case that state explosion translates into BDD explosion. Besides abstraction [12] and compositional reasoning techniques [15], approximation techniques may be very effective in controlling the size of BDDs. This paper reviews existing techniques for computing approximations, and their application to model checking. Due to space limitations, rather than presenting an exhaustive survey, we concentrate on representative techniques and the general framework in which they are used.

Approximation techniques help in two distinct ways: On the one hand, the validity of a property can often be established on a simplified model of the system. On the other hand, we can successively refine an abstraction to compute the exact result. This method is often more efficient than computing the exact result directly, because we can direct the search of the state space to avoid large BDDs. This use of approximations is one of the salient features of symbolic model checking.

Both overapproximations and underapproximations find use in symbolic model checking. Overapproximation can be used to establish the truth of universal formulae (or the falsity of existential ones) without considering the original system [13, 21, 10]. Overapproximations can also be used to establish an upper bound on the reachable states of a system [7, 17, 9, 18], which allows one to simplify the transition structure of the system. Finally, overapproximations may lead to more efficient computation of greatest fixpoints [3].
Dually, using underapproximations one can prove existential formulae or disprove universal ones on a simplified system [23, 22]. In particular, underapproximations can be applied to non-exhaustive verification. Underapproximations are also instrumental to the efficient computation of least fixpoints [24, 2]. In the context of incremental model checking algorithms like those of [26, 28], underapproximations can also be used to speed up the computation of greatest fixpoints [3].

Under- and overapproximations can be obtained using methods that can be broadly classified into three groups: changing the formula, changing the model, or changing the computation. In this paper we discuss the last two. The model is changed by modifying its transition relation. An overapproximation can be obtained by decomposing the model into a collection of simple subsystems [8, 9]. The approximation consists of summarizing the information passed among subsystems so as to ensure small BDDs. Another approach to obtaining an approximation uses hints [25]. Hints constrain the transition relation by disabling certain transitions, thereby avoiding problematic behavior and producing an underapproximation. They can be used dually for overapproximations [3]. The other way we discuss to obtain an approximation consists of changing the computation. For underapproximations, techniques that operate directly on the BDDs representing transitions and sets of states have proved successful [23, 22]. These techniques aim at extracting from a function f another function g ≤ f that preserves most minterms of f with a substantial reduction in the number of nodes.

We discuss the preliminaries in Section 2, followed by a general description of how approximation can be used for model checking in Section 3. Then, in Section 4 we describe how to obtain approximations by either changing the transition relation or the computation. We conclude with Section 5.
* This work was supported in part by SRC contract 98-DJ-620 and NSF grant CCR-99-71195.
2 Preliminaries

We model the systems to be verified as finite state machines, whose inputs, outputs, and states are encoded by strings over B = {0, 1}. Let x = {x_1, ..., x_n}, y = {y_1, ..., y_n}, and w = {w_1, ..., w_p} be sets of variables ranging over B. A (finite state) machine M is a pair of Boolean functions ⟨T(x, w, y), I(x)⟩, where T : B^{2n+p} → B is 1 if and only if there is a transition from the state encoded by x to the state encoded by y under the input encoded by w. I : B^n → B is 1 if the state encoded by x is an initial state. The sets x, y, and w are called the present state, next state, and input variables, respectively. Given a set A of atomic propositions, a labeling function L : B^n → 2^A assigns a subset of A to each state of the machine.

Properties can be specified in various ways. For instance, CTL* is a branching-time logic that augments propositional logic with path quantifiers (E and A) and temporal operators (U, R, X, G, and F). Given a finite state machine M, a state s of M, and a property φ, we write M, s ⊨ φ if and only if φ holds at state s of M. We simply write M ⊨ φ if M, s ⊨ φ holds for all initial states s.

Model checking of CTL* formulae reduces to the computation of (propositional) μ-calculus formulae. For instance, checking the CTL* formula EF p translates into computing the μ-calculus formula μZ. p ∨ EX Z, which designates the least fixpoint (μ) of the function p ∨ EX Z. The variable Z is the iteration variable, ranging over sets of states of M, and EX Z is the set of all the predecessors of states in Z according to T. Translation of CTL* formulae other than EF p, or model checking of ω-regular properties, may also entail composition of M with suitable automata. In the sequel we assume that the compositions have been performed, and we concentrate on the evaluation of the μ-calculus formulae.
The formulae of the μ-calculus are formed by recursively applying fixpoint, modal (i.e., EX, AX, EY, and AY), and Boolean operators to atomic propositions and variables. Negation is restricted to atomic propositions. This guarantees the monotonicity of the functions whose fixpoints are computed. The result of the evaluation of φ is a set of states of M, called the satisfying set of φ, and denoted by sat_M(φ). If this set includes the initial states, the model checking question is answered affirmatively.

The computation of least and greatest fixpoints in μ-calculus formulae is performed by the technique of successive approximations. In the case of a least fixpoint μZ. τ, the iteration variable Z is initialized to the empty set, and τ(Z) is repeatedly evaluated until convergence is achieved. For a greatest fixpoint νZ. τ, Z is initialized to the set of all states. Convergence to the correct result is guaranteed by the finiteness of the set of states and by the monotonicity of τ. It is possible to "jump start" the computation of a least fixpoint by initializing Z to any underapproximation of the fixpoint. The same holds for overapproximations and greatest fixpoints.

The functions that are found in μ-calculus model checking typically contain the EX operator or its time-dual EY (which computes all the successors of a set of states). The evaluation of these two operators is at the heart of model checking algorithms. Given the transition relation T(x, w, y) and the characteristic function of a set of states Z(y), the set of predecessors of the states in Z is computed as
EX Z = ∃w, y. Z(y) ∧ T(x, w, y) .    (1)
The result of this preimage computation is the characteristic function of the set in terms of the x variables. Likewise, the set of successors of the states in Z (the image of Z ) is computed as a function of the y variables by:
EY Z = ∃x, w. Z(x) ∧ T(x, w, y) .    (2)
The identities AX Z = ¬EX ¬Z and AY Z = ¬EY ¬Z are used for the two remaining modal operators.

Model checking of μ-calculus formulae involves the manipulation of large sets of states and transitions. In Symbolic Model Checking the sets are described by their characteristic functions, and Binary Decision Diagrams (BDDs) are used to represent these functions. BDDs are directed acyclic graphs in which each node is associated with a Boolean function. Each node, except the two sinks 0 and 1, is labeled by a variable. The labels along a path from a root to a sink must obey a given order. The BDD for a Boolean function is canonical for a chosen order. The size of the BDD depends, sometimes critically, on the order.
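To make the operators and the successive-approximation scheme concrete, the following sketch replaces BDDs and characteristic functions with explicit Python sets; all names are illustrative, and a real symbolic checker performs the same steps on BDDs instead.

```python
# Explicit-state analogue of equations (1)-(2) and of fixpoint iteration.
T = {  # transition relation as (present state, input, next state) triples
    (0, 'a', 1), (1, 'a', 2), (1, 'b', 0), (2, 'a', 2),
}
STATES = {0, 1, 2}

def ex(z):                         # EX Z (eq. 1): predecessors of Z
    return {x for (x, w, y) in T if y in z}

def ey(z):                         # EY Z (eq. 2): successors of Z
    return {y for (x, w, y) in T if x in z}

def ax(z):                         # AX Z = not EX (not Z)
    return STATES - ex(STATES - z)

def lfp(tau, seed=frozenset()):
    """mu Z. tau by successive approximation, optionally jump-started
    from an underapproximation `seed` of the fixpoint."""
    z = frozenset(seed)
    while tau(z) != z:
        z = tau(z)
    return z

# EF p = mu Z. p or EX Z, here with p = {2}: every state can reach 2.
assert lfp(lambda z: frozenset({2}) | frozenset(ex(z))) == {0, 1, 2}
```

Passing a non-empty `seed` that is known to lie below the fixpoint is exactly the "jump start" mentioned above: monotonicity of `tau` guarantees the iteration still converges to the same least fixpoint.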
3 Approximate Model Checking

In this section we discuss the application of approximation techniques to model checking. Approximations were originally used only for universal properties such as those in ACTL and ACTL* [15, 13] or those defined by ω-automata [1, 12]. More recently, approaches for mixed properties have been introduced [11, 20, 21, 14, 10].

A property φ is existential if T ⊆ T′ implies sat_⟨T,I⟩(φ) ⊆ sat_⟨T′,I⟩(φ): an existential property has a larger satisfying set in a graph with more edges. A property is universal if it is the negation of an existential property; universal properties have a smaller satisfying set in a graph with more edges. The existential fragment of the μ-calculus is obtained by disallowing the AX and AY operators. Analogously, the universal fragment consists of formulae that do not contain EX and EY. A property that is neither universal nor existential is mixed. A property that imposes a requirement on all paths is universal; hence ACTL, ACTL*, and linear formalisms such as LTL and ω-automata can only express universal properties.
If I is not included in an overapproximation of sat_⟨T,I⟩(φ), we can conclude that ⟨T, I⟩ ⊭ φ. On the other hand, overapproximations of sat_⟨T,I⟩(φ) cannot be used to prove that a formula is true. Likewise, ⟨T, I⟩ ⊨ φ if I is included in an underapproximation of sat_⟨T,I⟩(φ), but underapproximations of sat_⟨T,I⟩(φ) cannot be used to prove that a formula is false.

Since μ-calculus formulae are monotonic except in the atomic propositions, underapproximations of their satisfying sets can be obtained by underapproximating the satisfying sets of their subformulae. How approximations can be computed for the modal operators is described in Section 4. For subformulae of the AX or AY type, one should notice that the techniques that return underapproximations of EX and EY (e.g., elimination of arcs from the transition relation) produce overapproximations of AX and AY, and vice versa. Hence, for mixed formulae, both types of approximations are needed to approximate sat_⟨T,I⟩(φ).

In general, starting from coarse approximations, one can successively refine them until the truth of the formula can be established. In fact, one can use underapproximations and overapproximations at the same time, and refine them both. This often allows one to decide the truth of the formula before its exact satisfying set is computed. Two more points are worth mentioning. First, since fixpoint computations can be "jump started," the effort spent in computing an approximation is not wasted even if the truth of the formula cannot be decided, since the satisfying sets can be reused when the approximation is refined. Second, the complement of the overapproximation of a satisfying set can be used as a don't-care condition when the approximation is refined.
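The simultaneous use of both kinds of approximation can be summarized in a small decision loop. The sketch below is hypothetical: `under(k)` and `over(k)` stand for any refinement procedures that return successively tighter under- and overapproximations of sat(φ), with under(k) ⊆ sat(φ) ⊆ over(k) at every level k.

```python
# Hypothetical refinement loop combining under- and overapproximations.
def check(initial, under, over, max_level):
    """Decide whether the formula holds on all initial states using
    approximations of its satisfying set, if possible."""
    for k in range(max_level):
        if initial <= under(k):        # I inside an underapprox: holds
            return True
        if not (initial <= over(k)):   # I outside an overapprox: fails
            return False
    return None  # undecided: fall back to the exact fixpoint computation
```

For example, with initial states {0, 1}, a growing underapproximation `lambda k: set(range(k + 1))`, and a constant overapproximation `lambda k: set(range(5))`, the loop returns True at level 1, before any exact satisfying set is computed.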
4 Approximate Fixpoint Computations

In this section we examine how fixpoint computations can be approximated to reduce their cost. This can be done in three major ways: approximating the formula whose satisfying set is being computed, approximating the transition relation (embedded in the functional), or modifying the computation itself. The first approach is not discussed in this paper; Sections 4.1 and 4.2 examine the latter two.

4.1 Approximating the Transition Relation

The transition relation T of a finite state machine is usually given as the conjunction of bit relations. Image computation in this case consists of repeated conjunctions:
EY Z = ∃x, w. Z(x) ∧ ⋀_{1≤i≤n} T_i(x, w, y_i) .
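The benefit of a good quantification schedule can be illustrated with a toy relational-join analogue of the partitioned image computation above. The representation is hypothetical (each conjunct is a pair of a variable support and a set of rows), but the scheduling rule is the one used with BDDs: a variable is existentially quantified as soon as no remaining conjunct mentions it.

```python
# Toy sketch of conjunction with early quantification.
def join(a, b):
    """Conjoin two relations given as (support tuple, set of rows)."""
    (sa, ra), (sb, rb) = a, b
    support = tuple(sorted(set(sa) | set(sb)))
    shared = set(sa) & set(sb)
    rows = set()
    for u in ra:
        du = dict(zip(sa, u))
        for v in rb:
            dv = dict(zip(sb, v))
            if all(du[x] == dv[x] for x in shared):
                d = {**du, **dv}
                rows.add(tuple(d[x] for x in support))
    return support, rows

def exists(rel, var):
    """Existentially quantify (project away) one variable."""
    s, rows = rel
    keep = [i for i, x in enumerate(s) if x != var]
    return tuple(s[i] for i in keep), {tuple(r[i] for i in keep) for r in rows}

def image(conjuncts, to_quantify):
    """Conjoin in order, quantifying each variable in `to_quantify` as
    soon as it no longer appears in any remaining conjunct."""
    acc = conjuncts[0]
    for i, c in enumerate(conjuncts[1:], start=1):
        acc = join(acc, c)
        remaining = set().union(*(set(d[0]) for d in conjuncts[i + 1:]))
        for v in [x for x in acc[0] if x in to_quantify and x not in remaining]:
            acc = exists(acc, v)
    return acc

# Two-bit example: y1 = NOT x1, y2 = x2 XOR w, from state (x1, x2) = (0, 0).
Z = (('x1', 'x2'), {(0, 0)})
T1 = (('x1', 'y1'), {(0, 1), (1, 0)})
T2 = (('w', 'x2', 'y2'), {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)})
support, rows = image([Z, T1, T2], {'x1', 'x2', 'w'})
assert support == ('y1', 'y2') and rows == {(1, 0), (1, 1)}
```

Here x1 is quantified immediately after Z is conjoined with T1, because T2 does not mention it; the intermediate relation therefore never carries all five variables at once, which is precisely the effect a good schedule has on intermediate BDD sizes.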
An analogous formula holds for preimage computation. Image (preimage) computations may, in the worst case, produce BDDs of exponential size due to quantification; hence they are considered the most expensive operations in model checking. To avoid worst-case behavior whenever possible, input and state variables are quantified as soon as they appear in only one conjunct. The order in which the bit relations are considered affects the so-called quantification schedule, and is therefore important to keep the size of the intermediate BDDs under control. Approximations of the transition relation should also strive to improve the quantification schedule, or otherwise reduce the number of variables involved in the conjunctions. In this section we describe two such approaches to approximation. It should be noted that over- and underapproximation of the transition relation result in over- and underapproximation of images and preimages, respectively.

Machine Decomposition. A state space decomposition D of a finite state machine ⟨T(x, w, y), I(x)⟩ is a collection of sets {x_j ⊆ x} such that ⋃_j x_j = x. A finite state machine decomposed according to D is a collection of machines {⟨T_j, I_j⟩} such that
T_j(x, w, y) = ⋀_{x_i ∈ x_j} T_i(x, w, y_i) ,    I_j(x_j) = ∃(x \ x_j). I(x) .
That is, T_j is the conjunction of the bit relations for the variables in x_j, while I_j is the projection of the initial states onto the subspace of submachine j. If x_j ∩ x_k = ∅ for j ≠ k, the decomposition is non-overlapping.

Once a machine is decomposed, each submachine can be analyzed in isolation [15]. Often, though, ignoring the interaction of a submachine with its neighbors (in terms of shared variables) leads to excessive approximation and a consequent inability to prove the properties of interest. Therefore, it is often advantageous to retain, albeit in summarized form, information on what inputs the neighbors can supply to a given submachine [7].

Several mechanisms are available to control the trade-off between the degree of approximation and the simplification of the computation. First of all, a good decomposition groups together those state variables that are tightly related. The description of the system to be analyzed, which is often hierarchical, may provide useful information for the identification of related variables. This information can be supplemented by analysis of the dependencies of the state variables upon one another and upon the inputs [8].
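An explicit-state sketch of the T_j/I_j construction above shows why the decomposition overapproximates: the bits outside a block behave as free inputs to the submachine. All names here are illustrative; `bit_next[i]` stands for the next-state function underlying the bit relation T_i.

```python
from itertools import product

def all_assignments(bits):
    """All 0/1 assignments to a set of bit indices."""
    bits = sorted(bits)
    return [dict(zip(bits, vals)) for vals in product((0, 1), repeat=len(bits))]

def project_initial(initial_states, block):
    """I_j = ∃(x \\ x_j). I(x): project the initial states onto the block."""
    return {tuple(s[i] for i in block) for s in initial_states}

def submachine_image(block, bit_next, states_j, inputs, n):
    """One image step of submachine j on projections onto `block`;
    bits outside the block are left unconstrained."""
    succ = set()
    for s in states_j:
        for w in inputs:
            for free in all_assignments(set(range(n)) - set(block)):
                full = {**dict(zip(block, s)), **free}  # extend arbitrarily
                succ.add(tuple(bit_next[i](full, w) for i in block))
    return succ

# Two bits: x0' = x1, x1' = x0 XOR w; block {0} sees x1 as a free input.
bit_next = {0: lambda v, w: v[1], 1: lambda v, w: v[0] ^ w}
I0 = project_initial({(0, 0)}, (0,))
img = submachine_image((0,), bit_next, I0, inputs=(0, 1), n=2)
assert img == {(0,), (1,)}  # overapproximates the exact projection {(0,)}
```

From state (0, 0) the exact machine can only reach states with x0 = 0 in one step, but the submachine for block {0} also admits x0 = 1, because it ignores the constraint that x1 is currently 0. Retaining summarized information about the neighbors, as in [7], tightens exactly this kind of loss.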
The accuracy of the approximation depends on the granularity of the decomposition. Sometimes allowing some overlap among the blocks of the decomposition leads to much improved results without a great increase in computational effort [9]. Conversely, the CPU and memory requirements can be decreased by existentially quantifying variables in the T_j's. This corresponds to tearing a connection between two machines [13] and adding a new input.

Hints. Another approach is to approximate the transition relation based on its high-level structure. The large BDD sizes caused by complex data structures such as ALUs and register files are well known. This knowledge can be applied to modify the transition relation and circumvent BDD size explosion. The modification is done by applying hints [25], predicates H(x, w) in terms of the inputs and states of the machine. Hints decompose the transition relation by separating the compound effect of the data structures, allowing only part of the structure or of its functionality to be in effect at any point in time. The transition relation may be either underapproximated as T(x, w, y) ∧ H(x, w) [25], or overapproximated as T(x, w, y) ∨ ¬H(x, w) [3]. The underapproximation allows a disjunctive decomposition of the transition relation. When using hints, only transitions from the region of the state space within H(x, w) are considered; different regions of the state space can be explored using different hints. In the overapproximation, the original behavior of the transition relation is preserved within the hint, while where the hint is false (¬H(x, w)) the behavior is overapproximated. Hints reduce BDD sizes by improving the quantification schedule and sometimes by reducing the number of variables in the transition relation. Effective hints require user knowledge of the machine; detailed examples are presented in [25, 2, 3].
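The two hint-based approximations T ∧ H and T ∨ ¬H can be sketched directly in the set-based analogue (names illustrative, with the hint H a predicate on state/input pairs):

```python
# Hint-based under- and overapproximation of a transition relation
# given as a set of (state, input, next state) triples.
def constrain(T, H):
    """Underapproximation T ∧ H: keep only hint-enabled transitions."""
    return {(x, w, y) for (x, w, y) in T if H(x, w)}

def release(T, H, states, inputs):
    """Overapproximation T ∨ ¬H: where the hint is false, allow every
    transition; within the hint, the original behavior is preserved."""
    return T | {(x, w, y) for x in states for w in inputs
                for y in states if not H(x, w)}

T = {(0, 'a', 1), (1, 'a', 0)}
H = lambda x, w: x == 0              # hint: only explore from state 0
assert constrain(T, H) <= T <= release(T, H, {0, 1}, {'a'})
```

The sandwich `constrain(T, H) ⊆ T ⊆ release(T, H, ...)` is what makes the resulting images under- and overapproximations, respectively, of the exact image.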
For instance, reducing an ALU to only one opcode, or restricting a register file to only one address at a time, is effective in reducing BDD sizes during the fixpoint computation. Automating the choice of hints involves identifying a priori the data structures that cause large BDDs and extracting them from the high-level machine description. The modification of the fixpoint computation upon application of these hints is discussed in the next section.

4.2 Modifying the Fixpoint Computation

High Density Reachability Analysis. Approximating the fixpoint μZ. τ (or νZ. τ) may be achieved by approximating the fixpoint iterate Z; Z may be approximated in any subset of the iterations. High density reachability analysis [23] was initially presented as an approach to computing a least fixpoint by computing an underapproximation at every iteration in the context of reachability analysis. (It can also be applied to obtaining other approximations.) The approach proposes to underapproximate the fixpoint iterate whose image or preimage is computed at every step (control measures), while minimizing the approximation at every step (efficiency measures) [24]. Cabodi et al. [6] and Narayan et al. [19] propose related modifications of the fixpoint computation.

Approximating Z is combined with controlling the BDD sizes in image/preimage computations in a least fixpoint computation by extracting a dense [23, 22] BDD approximation of the minimal input to the image/preimage computation, the frontier set (the difference between two successive fixpoint iterates). The density of a BDD is defined as the ratio of the number of minterms to the number of nodes in the BDD. A dense approximation aims at minimizing the underapproximation of a BDD; an overapproximation is obtained by working on the complement. Modifying the computation in this manner may lead to empty frontier sets before termination (dead-ends), requiring the generation of a non-empty frontier set from the iterate Z.
Such computations are also expensive in terms of BDD operations, since they may involve computing the image or preimage of the entire iterate Z. The termination check is a similarly expensive operation. In [24], an overall approach is proposed that tries to minimize expensive BDD operations while computing a close approximation to the fixpoint.

Approximate Reachability Analysis. Given a machine decomposition, one can compute an overapproximation of the reachable states by computing images for the submachines in appropriate sequences until convergence is achieved. The image computation for one submachine uses the reachable states of all submachines as constraints. Several detailed schemes are discussed in [7]. In the frame-by-frame (FBF) approach, the submachines are considered in round-robin fashion. If all states reached by the submachines are used as constraints, one obtains the Reached FBF (RFBF) method. If, on the other hand, only the states in which the submachines could be after the last iteration are considered, one obtains a more expensive, yet more accurate, method known as To FBF (TFBF). In the machine-by-machine (MBM) approach, each submachine is evaluated repeatedly until it yields no more states. Two variants of MBM have been studied. In the first [7], the initial assumption is that all states of the submachines are reachable. Subject to this assumption, each submachine is analyzed. This analysis may indicate that not all states are reachable in some submachines. The submachines are then re-analyzed under this more accurate assumption. The process continues until the assumptions cannot be refined further. The resulting computation is a greatest fixpoint iteration of successive least fixpoint computations. In the second variant of MBM [18], the initial assumption is that only the initial states of each submachine are reachable. This replaces the outer greatest fixpoint computation of the first form of MBM with a least fixpoint.
Hence, this second variant is called Least-fixpoint MBM (LMBM). It can be shown that LMBM computes exactly the same approximation as RFBF, while being substantially faster in practice.
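The structure of the LMBM iteration can be sketched as a least fixpoint over the vector of per-submachine reached sets. The interface below is hypothetical: `reach(j, approx)` stands for the inner reachability computation of submachine j from I_j, constrained by the other entries of `approx`, and must be monotonic in `approx` for the outer loop to be a least fixpoint.

```python
# Structural sketch of least-fixpoint machine-by-machine (LMBM).
def lmbm(init, reach):
    """init[j] is I_j; returns the converged per-submachine
    overapproximations of the reachable states."""
    approx = [set(s) for s in init]
    changed = True
    while changed:                    # outer least fixpoint over blocks
        changed = False
        for j in range(len(approx)):
            new = reach(j, approx)    # inner per-submachine fixpoint
            if new != approx[j]:
                approx[j] = new
                changed = True
    return approx

# Toy instance: submachine 0 copies submachine 1's bit; submachine 1
# can always set its bit to 1.
def reach(j, ap):
    return (ap[0] | ap[1]) if j == 0 else (ap[1] | {1})

assert lmbm([{0}, {0}], reach) == [{0, 1}, {0, 1}]
```

Starting from the initial projections rather than from "everything reachable" is what distinguishes this loop from the first MBM variant, whose outer iteration shrinks a universal assumption instead.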
The schemes used for reachability analysis can be extended to the computation of other least fixpoints. It should be noted that when applied to reachability analysis, the approximation scheme we have discussed is akin to the assume-guarantee style of reasoning [16].

Guided Search. The application of hints results in a set of approximations to the original transition relation. Their use in the computation of fixpoints is called guided search. When an underapproximation of the fixpoint is to be calculated, the hints are applied so as to produce underapproximations of the transition relation; conversely for overapproximations. When convergence to the exact fixpoint is requested, the last approximation coincides in both cases with the original transition relation. The usefulness of applying approximations before the original transition relation comes from the ability they afford to compute a good approximation of the desired fixpoint with very modest effort. This approximation of the result is used to "jump start" the computation that employs the original transition relation. Because this last computation starts close to the fixpoint, convergence is often fast and the BDDs involved remain small. This approach to the computation of fixpoints takes advantage of the flexibility available in the evaluation of functionals whose disjunction amounts to the transition relation of the system [27]. Using the result obtained with one approximation as the starting point for the next is a crucial feature of fixpoint computation based on hints. The switch from one approximation to the next, however, may be critical in terms of performance, because it may involve the computation of the image or preimage of a set of states with many elements and a large BDD. For this reason, the techniques developed for dead-end resolution in high density traversal are applied when moving from one approximation to the next.
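The guided-search schedule for a least fixpoint can be sketched as a sequence of jump-started runs, one per hint, the last run using the exact functional (names illustrative; a real implementation would intersperse the dead-end resolution discussed above at each switch):

```python
# Guided search for a least fixpoint: each functional in `taus` uses a
# more permissive underapproximation of T, the last one being exact.
def guided_lfp(taus):
    z = frozenset()
    for tau in taus:
        while tau(z) != z:           # successive approximation
            z = tau(z)
        # z underapproximates the next functional's fixpoint, so it is
        # a valid jump start for the next run
    return z

# Reachability on the chain 0 -> 1 -> 2 -> 3 with initial state 0:
def mk(T):
    return lambda z: frozenset({0}) | frozenset(y for (x, y) in T if x in z)

hinted = mk({(0, 1)})                # hint keeps only the first edge
exact = mk({(0, 1), (1, 2), (2, 3)})
assert guided_lfp([hinted, exact]) == {0, 1, 2, 3}
```

Correctness rests on monotonicity: since each constrained functional lies below the next one pointwise, every intermediate result is an underapproximation of the exact fixpoint, and the final run converges to it.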
5 Conclusions

This paper has discussed a framework for approximate model checking. Overapproximations and underapproximations may be used depending on the structure of the formula to be verified. Techniques for two different approaches, approximation of the transition relation and modification of the computation, are reviewed. The two approaches are orthogonal and can be used in conjunction with each other.

Experimental results suggest that machine decomposition and approximate reachability techniques are very effective in computing an overapproximation of the reachable states of large designs. The application of this overapproximation to restrict the state exploration in model checking [17] has been found to speed up model checking computations considerably. High density reachability analysis is useful for deriving underapproximations of the reachable states of large designs, which may be applied to checking the truth (falsity) of existential (universal) formulae; however, in small or medium-sized designs the overhead of this technique may outweigh its advantage. Hints are effective when some information on the model is available. A catalog of useful hints is given in [25]. These hints have been applied to reachability analysis, LTL model checking, and CTL model checking [24, 2, 3] and have proven effective in cases where problematic data structures are known a priori. The utility of applying hints as overapproximations has yet to be investigated further. Automating the choice of hints is a topic for future research.

In summary, a combination of various techniques is required to tackle the robustness issue in model checking. Approximations are required to avoid BDD size explosion in model checking computations and are largely automated. Abstractions, relying on intuition about the design, may be applied independently to reduce the complexity in a top-down fashion. The complementary advantages of automation and of the more powerful reductions provided by the different approaches need to be balanced in a practical verification environment.
References

1. F. Balarin and A. L. Sangiovanni-Vincentelli. An iterative approach to language containment. In C. Courcoubetis, editor, Fifth Conference on Computer Aided Verification (CAV '93). Springer-Verlag, Berlin, 1993. LNCS 697.
2. R. Bloem, K. Ravi, and F. Somenzi. Efficient decision procedures for model checking of linear time logic properties. In N. Halbwachs and D. Peled, editors, Eleventh Conference on Computer Aided Verification (CAV '99), pages 222–235. Springer-Verlag, Berlin, 1999. LNCS 1633.
3. R. Bloem, K. Ravi, and F. Somenzi. Symbolic guided search for CTL model checking. In Proceedings of the Design Automation Conference, Los Angeles, CA, June 2000. To appear.
4. R. E. Bryant. Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers, C-35(8):677–691, August 1986.
5. J. R. Burch, E. M. Clarke, K. L. McMillan, D. L. Dill, and L. J. Hwang. Symbolic model checking: 10^20 states and beyond. Information and Computation, 98:142–170, 1992.
6. G. Cabodi, P. Camurati, and S. Quer. Improved reachability analysis of large finite state machines. In Proceedings of the International Conference on Computer-Aided Design, pages 354–360, Santa Clara, CA, November 1996.
7. H. Cho, G. D. Hachtel, E. Macii, B. Plessier, and F. Somenzi. Algorithms for approximate FSM traversal based on state space decomposition. IEEE Transactions on Computer-Aided Design, 15(12):1465–1478, December 1996.
8. H. Cho, G. D. Hachtel, E. Macii, M. Poncino, and F. Somenzi. Automatic state space decomposition for approximate FSM traversal based on circuit analysis. IEEE Transactions on Computer-Aided Design, 15(12):1451–1464, December 1996.
9. S. G. Govindaraju, D. L. Dill, A. J. Hu, and M. A. Horowitz. Approximate reachability with BDDs using overlapping projections. In Proceedings of the Design Automation Conference, pages 451–456, San Francisco, CA, June 1998.
10. J.-Y. Jang, I.-H. Moon, and G. D. Hachtel. Iterative abstraction-based CTL model checking. In Proceedings of the Conference on Design Automation and Test in Europe (DATE00), pages 502–507, Paris, France, March 2000.
11. P. Kelb, D. Dams, and R. Gerth. Practical symbolic model checking of the full μ-calculus using compositional abstractions. Technical Report 95-31, Department of Computing Science, Eindhoven University of Technology, 1995.
12. R. P. Kurshan. Computer-Aided Verification of Coordinating Processes. Princeton University Press, Princeton, NJ, 1994.
13. W. Lee, A. Pardo, J. Jang, G. Hachtel, and F. Somenzi. Tearing based abstraction for CTL model checking. In Proceedings of the International Conference on Computer-Aided Design, pages 76–81, San Jose, CA, November 1996.
14. J. Lind-Nielsen and H. R. Anderson. Stepwise CTL model checking of state/event systems. In N. Halbwachs and D. Peled, editors, Eleventh Conference on Computer Aided Verification (CAV '99), pages 316–327, 1999. LNCS 1633.
15. D. E. Long. Model Checking, Abstraction, and Compositional Verification. PhD thesis, Carnegie-Mellon University, July 1993.
16. K. L. McMillan. Verification of infinite state systems by compositional model checking. In Correct Hardware Design and Verification Methods (CHARME '99), pages 219–233, Berlin, September 1999. Springer-Verlag. LNCS 1703.
17. I.-H. Moon, J.-Y. Jang, G. D. Hachtel, F. Somenzi, C. Pixley, and J. Yuan. Approximate reachability don't cares for CTL model checking. In Proceedings of the International Conference on Computer-Aided Design, pages 351–358, San Jose, CA, November 1998.
18. I.-H. Moon, J. Kukula, T. Shiple, and F. Somenzi. Least fixpoint approximations for reachability analysis. In Proceedings of the International Conference on Computer-Aided Design, pages 41–44, San Jose, CA, November 1999.
19. A. Narayan, A. J. Isles, J. Jain, R. K. Brayton, and A. L. Sangiovanni-Vincentelli. Reachability analysis using partitioned ROBDDs. In Proceedings of the International Conference on Computer-Aided Design, pages 388–393, November 1997.
20. A. Pardo and G. D. Hachtel. Automatic abstraction techniques for propositional μ-calculus model checking. In O. Grumberg, editor, Ninth Conference on Computer Aided Verification (CAV '97), pages 12–23. Springer-Verlag, Berlin, 1997. LNCS 1254.
21. A. Pardo and G. D. Hachtel. Incremental CTL model checking using BDD subsetting. In Proceedings of the Design Automation Conference, pages 457–462, San Francisco, CA, June 1998.
22. K. Ravi, K. L. McMillan, T. R. Shiple, and F. Somenzi. Approximation and decomposition of decision diagrams. In Proceedings of the Design Automation Conference, pages 445–450, San Francisco, CA, June 1998.
23. K. Ravi and F. Somenzi. High-density reachability analysis. In Proceedings of the International Conference on Computer-Aided Design, pages 154–158, San Jose, CA, November 1995.
24. K. Ravi and F. Somenzi. Efficient fixpoint computation for invariant checking. In Proceedings of the International Conference on Computer Design, pages 467–474, Austin, TX, October 1999.
25. K. Ravi and F. Somenzi. Hints to accelerate symbolic traversal. In Correct Hardware Design and Verification Methods (CHARME '99), pages 250–264, Berlin, September 1999. Springer-Verlag. LNCS 1703.
26. O. V. Sokolsky and S. A. Smolka. Incremental model checking in the modal mu-calculus. In D. L. Dill, editor, Sixth Conference on Computer Aided Verification (CAV '94), pages 351–363. Springer-Verlag, Berlin, 1994. LNCS 818.
27. F. Somenzi. Symbolic state exploration. Electronic Notes in Theoretical Computer Science, 23, 1999. http://www.elsevier.nl/locate/entcs/volume23.html.
28. G. Swamy. Incremental Methods for Formal Verification and Logic Synthesis. PhD thesis, University of California at Berkeley, 1996. UMI publication 9723211.