IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 29, NO. 5, MAY 2010


Coverage Driven High-Level Test Generation Using a Polynomial Model of Sequential Circuits Bijan Alizadeh, Mohammad Mirzaei, and Masahiro Fujita

Abstract—This paper proposes a high-level test generation method that models both the control part and the data path of a register transfer level circuit as a set of polynomial functions and generates behavioral test patterns directly from the faulty behavior, rather than by comparing the faulty and fault-free circuits, using a hybrid Boolean-word canonical representation called the Horner expansion diagram (HED). Since this set of polynomial functions expresses the primary outputs and next states with respect to the primary inputs and present states, it is not necessary to perform a justification/propagation phase, which leads to a minimum number of backtracks. This improves fault coverage and reduces test generation time over logic-level techniques. We then assess the effectiveness of the high-level test generation with a simple gate-level automatic test pattern generation algorithm. Experimental results show the robustness and reliability of our method compared to other contemporary approaches in terms of fault coverage and CPU time.

Index Terms—Fault coverage, high-level test generation, Horner expansion diagram, polynomial modeling, sequential circuit testing.

I. Introduction

Increasing size and complexity of digital designs have made the manufacturing process more complicated and the verification of designs more intricate. This makes it essential to address critical verification issues at the early stages of the design cycle. Such complicated designs need to be tested for fabrication faults as well as functional faults. Several attempts have been made to raise the quality of testing methods with automatic test pattern generation (ATPG) and design for testability (DFT) techniques at the logic level and below. While ATPG and DFT tools at low levels of abstraction are based on gate-level random techniques and have not followed the design flow to higher levels of abstraction, some attempts have been made at higher levels. Although these techniques can increase the testability of a circuit considerably, they always incur some overhead in area, power, and performance. Therefore, a suitable formal model is required to define fault models and develop test pattern generation techniques at the behavioral level of abstraction or even higher.

Manuscript received October 1, 2008; revised January 12, 2009 and September 14, 2009. Current version published April 21, 2010. This paper was recommended by Associate Editor R. D. (Shawn) Blanton. B. Alizadeh and M. Fujita are with the Very-Large-Scale Integration Design and Education Center, University of Tokyo, Tokyo 113-0032, Japan (e-mail: alizadeh@cad.t.u-tokyo.ac.jp; fujita@ee.t.u-tokyo.ac.jp). M. Mirzaei is with the Department of Electrical Engineering, Sharif University of Technology, Tehran 11365-8639, Iran (e-mail: [email protected]). Digital Object Identifier 10.1109/TCAD.2010.2043571

In order to improve the testing of modern system-on-a-chip (SoC) devices, several automatic test pattern generation approaches have been proposed to detect manufacturing faults as well as functional ones. While ATPG is typically used for manufacturing testing, functional testing relies on binary decision diagram (BDD)-based methods and satisfiability (SAT) tools [1]–[4]. Since both BDD- and SAT-based methods require the design to be flattened to the bit level, they cannot solve this problem efficiently, either in terms of memory or run time. This is why these methods are not scalable and cannot deal with large industrial benchmarks. In the hybrid satisfiability (HSAT) approach in [5], which generates functional test vectors for register transfer level (RTL) designs, if variable assignments that satisfy the conjunctive normal form (CNF) clauses cause the linear programming constraints in the arithmetic domain to be infeasible, backtracking is needed to select another set of Boolean assignments. Although HSAT is able to model bit-level and word-level expressions, nonlinear operators such as integer multiplication must be decomposed into linear operators because integer linear programming is used. To do so, one of the operands needs to be expanded in terms of its bits. The word-level implications on arithmetic operators that are used in [6] are much weaker than those provided by a generic constraint solving technique, and the efficiency of this approach is limited by the heuristics that guide the constraint propagation part.

Evolutionary and genetic algorithmic methods used in [7]–[9] are better than the fully random-based methods used in state-of-the-art ATPG approaches. However, they are not very robust methods for generating test vectors that achieve the highest coverage. Although these methods can obtain good coverage on small and some medium-size designs, their random nature makes them perform poorly on large sequential circuits, where the number of inputs and registers/flip-flops increases dramatically.

A metric of fault coverage based on data flow analysis was proposed in [29]; it is suitable for design validation since data flow analysis is very efficient in detecting design faults. This metric evaluates the data flow coverage achieved by a given sequence of test patterns. The proposed method is performed in three steps: 1) generation of the data flow graph from a hardware description language (HDL) description; 2) selection of a subset of paths to be executed; and 3) simulation of the HDL description with test patterns to determine what fraction of the paths are executed.

RTL ATPG algorithms in [10]–[12], [30] are based on a data structure named the assignment decision diagram (ADD). In this





method, each element in the circuit is symbolically represented by an ADD representation and then tested by justifying test vectors to its inputs from a primary input and propagating the responses from its outputs to a primary output. Although justification and propagation are done symbolically on the ADD representation, backtracking is required, similar to most ATPG methods. This can lead to an exponential increase in the complexity of the algorithm, and its applicability to deep sequential circuits becomes impossible.

In [13], we proposed a method based on a hybrid Boolean-word-level canonical representation named the Horner expansion diagram (HED) [14] to alleviate the above-mentioned problems by generating high-level test vectors for combinational data paths. In this paper, we extend this method to sequential circuits as well as data paths. For this purpose, a word-level transition function of a sequential circuit is extracted and modeled by polynomial functions to define fault models and develop test pattern generation techniques at the behavioral level of abstraction. The basic idea comes from the fact that HED is able to express the design in terms of integer equations in a formal model. In contrast to other approaches to test generation, it is not necessary to perform a justification/propagation phase, because our method is able to express primary outputs in terms of primary inputs and present state variables. As shown in the experimental results, our generated high-level test vectors achieve better performance in covering stuck-at faults when the fault coverage is computed at the gate level. Experimental results show the robustness and reliability of our method compared to other approaches in terms of fault coverage and CPU time.

We present a high-level test generation technique without adding extra functionality to the circuit, as done in [15], which introduced a DFT method. Furthermore, in our approach, a set of polynomials is used to model sequential circuits and then, after injecting behavioral faults into the circuit, we utilize a canonical hybrid word-Boolean representation called HED to extract high-level test patterns for each injected fault. In this way, we are able to take into account the data path and controller parts of each design simultaneously. The method presented in [15], however, assumes that the data path and controller parts are separate; it starts from an initial state and computes some test control-data flows, and then test vectors are generated by justifying the pre-computed test sets in the circuit from the system inputs and propagating the output responses to the system outputs. In addition, with HED all functional information in the design is concisely captured and used in ATPG. Besides, in our method it is not necessary to perform justification/propagation and backtracking phases, which decreases the processing time dramatically.

In [24], a hierarchical test generation technique is described. It targets one module at a time and makes use of a modified gate-level path oriented decision making (PODEM) algorithm to generate module level test vectors, which are justified and propagated using an architectural level test generator. It has some drawbacks: 1) if this technique is applied to a large design, the module itself might be too complex for an ATPG tool to handle, and 2) the low-level structure of the module under test (MUT) should be available at the high level. In

our methodology, however, we first generate architectural level test vectors, which are converted to gate-level ones using a gate-level PODEM-like algorithm to compute the gate-level fault coverage. In contrast to [24], the high-level architectural constraints are visible to the low-level test computations, while the low-level structure of the circuit is not required.

In [25] and [26], functional constraints on the MUT are extracted when one module at a time is targeted, to reduce the complexity of test generation. In [26], which is an extension of the previous work [25], functional constraints are extracted incrementally and the constraints can be reused. If these techniques are applied to larger designs, the modules themselves might be too big for an ATPG tool to handle. Furthermore, if they are applied at a lower level of hierarchy, the surrounding logic may become too large, making the constraint extraction process very tedious and error prone. In our technique, we do not need to deal with this issue, since we first generate high-level test vectors and then convert them to gate-level ones. In addition, the method in [25] utilizes internal registers accessible through primary inputs/outputs, which are referred to as primary input/output accessible registers (PIERs), to reduce the sequential depth during test generation using ATPG tools. Our methodology, however, makes use of a canonical hybrid decision diagram to represent the transition function of the circuit. In this way, the PIERs and their access mechanisms do not need to be defined during the architectural level specification. Therefore, the internal structure of the design under test need not be available, and there is no limitation on applying our method to a wide variety of digital circuits.

In [27], a structural software-based self-test (SBST) methodology based on gate-level ATPG is proposed to automatically extract complex constraints. Statistical regression analysis is applied to the RTL simulation results, using manually coded instruction templates, to derive a model of the surrounding logic of the MUT. The learned model is converted into a virtual constrained circuit (VCC), followed by ATPG on the VCC–MUT in an iterative way. This methodology is based on knowledge of the processor's instruction set architecture. Another drawback is that manual effort is required for the selection and construction of a set of representative test program templates from a quite large space of possible templates. Furthermore, this structural SBST methodology can only be applied to combinational processor components, without exploiting any possible regular structure of critical-to-test combinational components. Moreover, it results in large test program sizes and, therefore, high test cost, due to the fact that the test programs cannot be in a compact form since they are automatically generated using test program templates. Moreover, while the use of regression analysis is effective in establishing word-level correspondences, it fails when the actual mapping function is a relational operator or a Boolean function, or involves bit-level manipulation of the settable fields.

In [28], the authors describe a generic and systematic flow of SBST application on two communication peripheral cores. The methodology achieves high fault coverage but needs deep knowledge of the peripheral core, leading to a long test development time with a high human effort.



In summary, the methods presented in [24]–[28] make use of a gate-level ATPG tool to generate module level test vectors while the rest of the design imposes some constraints on the inputs/outputs of the MUT. Then the generated patterns are translated to chip-level test programs. In our methodology, however, we directly generate functional test patterns at a higher level of abstraction, which can be described as a test program. After generating functional test patterns, in order to evaluate their quality, we need to convert the functional tests to gate-level ones and then report the stuck-at fault coverage. We have employed a gate-level ATPG to convert the functional test patterns to logic test vectors. It is worth noting that in [31] we have used satisfiability modulo theories (SMT) solvers for converting high-level test patterns to bit-level test vectors. In addition, in our methodology, the process of generating high-coverage functional tests is scalable and applicable to any digital circuit with low turn-around times, while the methods discussed in [24]–[27] only deal with chip-level test generation for processors. Furthermore, these methods require a custom ATPG tool and synthesis of the complete design.

The rest of this paper is organized as follows. Section II describes HED as a hybrid canonical Boolean-word level representation. Our coverage driven test generation approach is discussed in Section III. The experimental setup and results, with a brief conclusion, are given in Sections IV and V, respectively.

II. Hybrid Canonical Representation

We first introduce our graph-based representation called HED [14] for functions with a mixed Boolean and integer domain and an integer range, to represent arithmetic operations at a high level of abstraction, whereas other proposed word-level decision diagrams [16] are graph-based representations for functions with a Boolean domain and an integer range. In HED, the functions to be represented are maintained as a single graph in canonical form. We assume that the set of variables is totally ordered and that all of the vertices constructed obey this ordering. Maintaining a canonical form requires obeying a set of conventions for vertex creation as well as weight manipulation. HED is a binary graph-based representation which supports polynomial functions by factorizing variables using Horner expansion recursively, as shown in (1), where const is a term which is independent of variable X, while linear is another term which serves as the coefficient of variable X

F(X, ...) = F(X = 0, ...) + X × F_linear(X, ...) = const + X × linear.        (1)

Definition 1: HED is a directed acyclic graph G = (VR, ED) with vertex set VR and edge set ED. The vertex set VR consists of two types of vertices, constant (C) and variable (V), while the edges carry integer values as weight attributes. A constant node v has a value val(v) ∈ Z as its attribute. A variable node v has as attributes an integer variable var(v) and two children const(v) and linear(v) ∈ {V, C}. According to the above definition, a vertex v in the HED denotes an integer function f_v that is defined recursively


Fig. 1. HED representation of 24 − 8Z + 12YZ − 6X² − 6X²Z. (a) Decomposition with respect to X. (b) Decomposition with respect to X² and Y. (c) Decomposition with respect to Z.

as follows.
1) If v ∈ C (a constant node), then f_v = val(v).
2) If v ∈ V (a variable node), then f_v = const(v) + var(v) × linear(v).

To normalize the weights, any common factor is extracted by taking the greatest common divisor (GCD) of the argument weights. In addition, we adopt the convention that the sign of the extracted weight matches that of the const part. This assumes that GCD always returns a nonnegative value. Once the weights have been normalized, the node table is searched for an existing vertex, or a new one is created. Similar to BDDs, each entry in this table is indexed by a key formed from the variable and the two children, i.e., the const and linear parts. As long as all vertices are created in this way, the graph remains in canonical form.

Example 1 (HED Representation): Fig. 1 illustrates how the following polynomial expression is represented by HED: f(X, Y, Z) = 24 − 8Z + 12YZ − 6X² − 6X²Z. Let the ordering of variables be X, Y, Z. First, the decomposition with respect to variable X is taken into account. As shown in Fig. 1(a), after rewriting f(X, Y, Z) = (24 − 8Z + 12YZ) + X ∗ (−6X − 6XZ) based on (1), the const and linear parts will be 24 − 8Z + 12YZ and −6X − 6XZ, respectively. The linear part is decomposed with respect to variable X again due to the X² term. After that, the decomposition is performed with respect to variable Y and then Z, as shown in Fig. 1(b).

In order to reduce the size of an HED, redundant nodes are removed and isomorphic subgraphs are merged. For this purpose, the greatest common divisor of the argument weights is taken to figure out isomorphic subgraphs as well as redundant nodes. In Fig. 1(b), 24 − 8Z, 12Z, and −6 − 6Z are rewritten as 8[3 + Z ∗ (−1)], 12[0 + Z ∗ (1)], and −6[1 + Z ∗ (1)], respectively. In order to normalize the weights, GCD(8, 12) = 4 and GCD(0, −6) = −6 are taken to extract common factors. Finally, Fig. 1(c) shows the normalized graph, where GCD(4, −6) = 2 is taken to extract the common factor between the out-going edges of the X node. In this representation, dashed and solid lines indicate const and linear parts, respectively. Note that, in order to have a simpler graph, paths to the 0-terminal have not been drawn in Fig. 1(c).
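As an illustration of the Horner decomposition in (1) and of the weight normalization described above, the short Python sketch below (our own, using sympy; it is not the authors' C++ HED package) splits the polynomial of Example 1 into its const and linear parts with respect to X, and extracts the common factor of one subterm with a GCD, mirroring how edge weights are normalized.

import sympy as sp

X, Y, Z = sp.symbols('X Y Z')
f = 24 - 8*Z + 12*Y*Z - 6*X**2 - 6*X**2*Z

def horner_split(expr, var):
    # const = expr with var set to 0; linear = the remaining cofactor of var,
    # which may itself still contain var (here, because of the X**2 term).
    const = expr.subs(var, 0)
    linear = sp.expand(sp.cancel((expr - const) / var))
    return sp.expand(const), linear

const, linear = horner_split(f, X)
print(const)   # 12*Y*Z - 8*Z + 24
print(linear)  # -6*X*Z - 6*X

# Weight normalization on a child: pull out the common factor,
# e.g., 24 - 8*Z = 8*(3 - Z), as done for the edge weights in Fig. 1.
content, primitive = sp.Poly(const.subs(Y, 0), Z).primitive()
print(content, primitive.as_expr())   # 8 and 3 - Z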




Fig. 2. Arithmetic operations in HED. (a) Addition. (b) Subtraction. (c) Multiplication.

In this representation, basic arithmetic operators such as adders, subtracters, multipliers, dividers, shifters, and multiplexers are available that work on symbolic integer variables. In order to represent Boolean functions, logical bitwise operations including NOT, AND, and OR have been provided.

A. Boolean Logic

In order to have an integrated representation of Boolean and integer variables, the logical operations need to be supported as well as arithmetic expressions. In HED, Horner expansion with respect to a Boolean variable is carried out in each Boolean node so that the hybrid representation remains canonical and as compact as possible. The decompositions for the NOT, AND, and OR logical operations utilized in HED are shown below:

Z = ∼X       ⇒   Z = 1 − X
Z = X and Y  ⇒   Z = X × Y
Z = X or Y   ⇒   Z = X + Y − X × Y.        (2)
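The mapping in (2) can be checked mechanically on all 0/1 assignments; the small Python sketch below (ours, purely illustrative and not part of the HED package) verifies that the polynomial encodings agree with the Boolean operators.

from itertools import product

# Polynomial encodings of the logical operators in (2), over values in {0, 1}
NOT = lambda x: 1 - x
AND = lambda x, y: x * y
OR  = lambda x, y: x + y - x * y

for x, y in product((0, 1), repeat=2):
    assert NOT(x) == (0 if x else 1)
    assert AND(x, y) == (1 if (x and y) else 0)
    assert OR(x, y)  == (1 if (x or y) else 0)
print('encodings of (2) agree with NOT/AND/OR on {0, 1}')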

B. Arithmetic Operations

1) Addition and Subtraction: Addition (Z = X + Y) and subtraction (Z = X − Y) can be represented canonically based on the following decompositions, as shown in Fig. 2(a) and (b):

Z = X + Y = Y + X × (1) = [0 + Y × (1)] + X × (1)
Z = X − Y = −Y + X × (1) = [0 + Y × (−1)] + X × (1).

2) Multiplication: A multiplier (Z = X ∗ Y) can be represented canonically based on the following decomposition, as shown in Fig. 2(c):

Z = X × Y = 0 + X × (Y) = 0 + X × [0 + Y × (1)].

3) Division: We give an algorithm for computing division in HED, where it is assumed that the divisor is a constant integer number. The recursive algorithm for finding the least common multiple (LCM) of two numbers is applied to find a common multiple of all divisors. After that, we rewrite the output function based on the following manipulation, where Z is the output signal:

Z = f1/m + f2/n = [ (n/GCD(m, n)) × f1 + (m/GCD(m, n)) × f2 ] / LCM(m, n).

For example, consider f1 = 5 + 3X, f2 = 9 + 8X², and Z = (f1/4) + (f2/3). We compute Z as follows, and thus 12 × Z = 51 + 9X + 32X² is considered as a new output variable:

Z = f1/4 + f2/3  →  GCD(3, 4) = 1, LCM(3, 4) = 12
Z = [3 × (5 + 3X) + 4 × (9 + 8X²)] / 12 = [(15 + 9X) + (36 + 32X²)] / 12 = (51 + 9X + 32X²) / 12.

4) Left and Right Shift Operations: While the shift-left operator, X << N, can be viewed as a scalar multiplication by 2^N, the shift-right operator, X >> N, can be modeled as a division by 2^N. By supporting shift operations, we are able to handle bit-slicing and bit-masking in a similar fashion. Cyclic shifts can also be modeled by adding the most significant bit to the equations.

Fig. 3. Multiplexer in HED.

5) Multiplexer: Although the multiplexer can be treated as Boolean logic, it is more efficient to consider it as an arithmetic operator if its inputs are word-level signals. Let X, Y, and Z be word-level variables and the selection signal s be a bit-level variable. Z = ITE(s, X, Y) describes that, for s = 1, Z = X; otherwise, Z = Y. Fig. 3 represents this arithmetic operation in HED, where the variables are decomposed according to Horner expansion as follows:

Z = (1 − s) × Y + s × X = [0 + Y × (1 − s)] + X × (s).
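As a small sanity check of the word-level encodings above, the sketch below (our own Python with sympy, not the authors' implementation) expresses the multiplexer of Fig. 3 as the polynomial (1 − s)·Y + s·X and rewrites the constant-divisor example over the common denominator LCM(3, 4) = 12, reproducing 12 × Z = 51 + 9X + 32X².

import sympy as sp

X, Y, s = sp.symbols('X Y s')

# Multiplexer Z = ITE(s, X, Y) as an arithmetic expression (Fig. 3).
mux = (1 - s)*Y + s*X
assert mux.subs(s, 1) == X and mux.subs(s, 0) == Y

# Division by constant divisors: bring f1/4 + f2/3 over LCM(3, 4) = 12.
f1 = 5 + 3*X
f2 = 9 + 8*X**2
Z = f1/sp.Integer(4) + f2/sp.Integer(3)
m = sp.lcm(4, 3)              # 12
print(m, sp.expand(m*Z))      # 12 and 32*X**2 + 9*X + 51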

III. Coverage Driven Test Vector Generation Approach

The goal of this section is to introduce a polynomial model of pipelined sequential circuits in order to generate high-level test sequences using HED. In this approach, although a data path oriented circuit can be directly represented in HED, a control oriented circuit needs to be taken into account as a word-level transition relation to be represented in HED. It should be noted that our approach is based on a pre-image computation process, so an initial state needs to be defined. In order to demonstrate the quality of the generated high-level test vectors, we compute the fault coverage at the logic level by converting the high-level test vectors to gate-level test patterns and applying them to the related logic circuits.

A. Polynomial Model of Sequential Circuits

In this section, we show how to represent the control and data path sections of a circuit using a set of polynomial functions. Finding a set of algebraic functions that describe a given circuit in a finite state machine with



data path (FSMD) model is called polynomial modeling of the circuit, which is formally defined below. Note that an FSMD is a universal specification model, proposed in [17], that can represent all hardware designs. In theory, the RTL design can be formalized as Gajski's FSMD model, which is an extension of the finite state machine (FSM) model with so-called register transfer operations, each of which can be considered as an assignment of a value, computed as an expression over a set of register values, to another register. The important point to be noted here is that our goal in defining the FSMD model is to save considerable front-end implementation effort, since translating from RTL to an FSMD is a well-known synthesis-like process. To do so, we utilize the GAUT high-level synthesis tool [18] to obtain the FSMD model of the design in a semi-automatic way. In addition, we employ the VHSIC hardware description language (VHDL) restructuring methodology presented in [19] in order to identify the controller and data path of a given design. After that, the data path and controller parts of the design are represented in HED based on a polynomial model which will be defined in this section. Finally, we will be able to generate high-level test patterns for each design under test, as will be discussed in the rest of this paper.

Definition 2: The FSMD is defined as an octet M = (PI, S, V, PO, U, A, transfunc, outfunc), where
PI is the set of primary input signals;
S is the finite set of control states;
V is the set of storage variables;
PO is the set of primary output signals;
U = {z ⇐ e | z ∈ PO ∪ V and e ∈ E} represents a set of storage or output assignments, where E = {g(a, b, c, ...) | a, b, c, ... ∈ PI ∪ V} represents a set of arithmetic expressions over the set PI ∪ V of input and storage variables;
A = {R(p, q) | p, q ∈ E, and R is any arithmetic-Boolean relation} represents a set of status signals as arithmetic relations between two expressions from the set E;
transfunc: S × A → S is the state transition function;
outfunc: S × A → U is the update function of the output and the storage variables.

The state transition function transfunc is a high-level expression which describes the next-state variables with respect to the present state and primary input signals. This function is modeled as a polynomial function so that it can easily be represented in HED. Since our approach to high-level test generation needs to traverse the FSMD from the primary outputs to the primary inputs, a pre-image computation is necessary in order to compute from which present states a given next state is reachable. To do so, the next-state variable in the polynomial model is replaced by the related value, and finally a simple polynomial equation is solved to find the values of the present state and primary input signals. In order to describe an FSMD using HED, we need to express transfunc and outfunc with a set of polynomial functions. To do so, we first encode each Sk ∈ S with a polynomial Pn(ps, k), which is defined as follows.


Definition 3: Consider an FSMD which has n + 1 states S = (S0, S1, ..., Sn). Each state Sk ∈ S (0 ≤ k ≤ n) can be described as the polynomial Pn(ps, k) in (3), where ps ∈ S is a symbolic integer variable that indicates the present state:

Pn(ps, k) = [ ∏_{i=0, i≠k}^{n} (ps − i) ] / [ ∏_{i=0, i≠k}^{n} (k − i) ].        (3)

Obviously, if ps is set to k, then Pn(ps, k) is 1. If ps = m (0 ≤ m ≤ n and m ≠ k), then the (ps − m) term in (3) becomes 0, which results in Pn(ps, k) = 0. In other words, the polynomial in (3) represents ps ≡ Sk. Based on Definition 3, the state transition function transfunc(ps, I) and the output function outfunc(ps, I) are expressed in terms of the present state and primary input variables as follows:

transfunc(ps, I) = Σ_{k=0}^{n} Pn(ps, k) × ftrans(ps, I, k)

outfunc(ps, I) = Σ_{k=0}^{n} Pn(ps, k) × fout(ps, I, k)

where ftrans(ps, I, k): S × PI × A → U and fout(ps, I, k): S × PI × A → U, extracted from the design, describe the operations performed in each state of the FSMD to generate the next state and output functions, respectively.

Example 2 (Polynomial Modeling): Consider the FSMD illustrated in Fig. 4. In order to describe the state transition function transfunc and the output function outfunc in terms of the present state ps and primary input i, each state is encoded based on an initial state. In this example, sA, as the initial state, is encoded by one (001). Similarly, sB and sC are encoded as 2 (010) and 4 (100), respectively. The state transition function is extracted as described in (4). We thereby obtain the HED representation of transfunc as illustrated in Fig. 5(a), where the variable ordering is i > ps. Similarly, the output can be described as a polynomial function as shown in (5). Fig. 5(b) depicts the HED representation of the out signal in terms of the primary input and present state variables, where the variable ordering is X > Y > b1 > Z > b2 > ps:

transfunc(ps, i) = [(ps − 2)(ps − 4)/3][2] − [(ps − 1)(ps − 4)/2] × [(i)4 + (1 − i)2] + [(ps − 1)(ps − 2)/6][1]        (4)

outfunc(ps, i) = out(ps, i) = [(ps − 1)(ps − 2)/6][(b2)(C × Z) + (1 − b2)(C − Z)]
C = [(ps − 2)(ps − 4)/3][(b1)(X + Y) + (1 − b1)(X − Y)].        (5)

In this example, we utilize a one-hot encoding method to express the state machine with respect to the state variables. It should be noted that other encoding techniques, such as gray encoding or integer encoding, can be used as easily. For example, if we want to use integer encoding, the variables will be sA = "000" = 0, sB = "001" = sA + 1, and sC = "010" = sA + 2. Although the polynomials for the state transition and output functions obtained from the integer encoding method are similar to those of the one-hot encoding method, the processing time for computing the coefficients with integer encoding is less than that with one-hot encoding.
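As a concrete illustration of Definition 3 and of the transition polynomial in (4), the following Python sketch (our own, with sympy; not the authors' HED package) builds the Lagrange-style state polynomials of (3), generalized from the codes 0, ..., n to the one-hot codes {1, 2, 4} used in Example 2, and evaluates the resulting transfunc on every (ps, i) pair to recover the transitions of Fig. 4.

import sympy as sp

ps, i = sp.symbols('ps i')

def state_poly(codes, k):
    # Selector polynomial: 1 when ps equals codes[k], 0 on every other code.
    num = sp.Mul(*[ps - c for j, c in enumerate(codes) if j != k])
    den = sp.Mul(*[codes[k] - c for j, c in enumerate(codes) if j != k])
    return num / den

codes = [1, 2, 4]                       # sA, sB, sC (one-hot encoding of Example 2)
ftrans = [2,                            # in sA: go to sB
          i*4 + (1 - i)*2,              # in sB: go to sC if i = 1, else stay in sB
          1]                            # in sC: go back to sA
transfunc = sp.expand(sum(state_poly(codes, k) * ftrans[k] for k in range(len(codes))))

for p in codes:
    for b in (0, 1):
        print(p, b, transfunc.subs({ps: p, i: b}))   # e.g., (2, 1) -> 4, i.e., sB -> sC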



Fig. 4. FSMD model: Example 2.

Fig. 5. HED representation: Example 2. (a) Next-state function (ns) in HED. (b) Output function (out) in HED.

B. High-Level Test Generation Method

After expressing the FSMD with integer equations, the next step in our test generation procedure is to obtain a set of high-level test sequences using various coverage metrics. Although it seems that the test generation problem needs to be described as an equivalence checking problem and then solved in a fashion similar to that in [20], we figured out a useful property of the HED representation that makes it very easy to generate high-level test vectors, while the computation time is O(1). In [13], we proposed this method for data path oriented designs, and now we are going to extend it to support sequential circuits as well.

In the HED representation, each polynomial function is decomposed based on Horner expansion [14]. The key point to be appreciated in this decomposition is that the linear part indicates under what conditions the function depends on the related variable [see (1)]. Now suppose the top variable is a faulty variable (FAULT), as shown in (6); then the linear portion expresses the conditions under which the FAULT variable is observed at the output. This property covers path sensitization completely, while no additional computation needs to be done. Fig. 6 depicts the const and linear portions of the faulty design after decomposing with respect to variable FAULT

f(FAULT, ...) = const + FAULT × linear.        (6)

Fig. 6. HED representation for test generation.

Fig. 7. Nonlinear term in HED representation.

Obviously, the const part is the behavior of the circuit which is independent of the given fault, while the linear part is exploited as a set of test vectors which are formally represented in HED. Indeed, the HED representation has an inherent property of automatically separating the fault-independent part from the faulty part of a faulty design. We will point out shortly how this property helps us to generate test vectors from the faulty specification. It is worth noting that if the injected fault results in a nonlinear term in the HED representation, only the linear part needs to be considered. For example, consider Z = (const1 + FAULT1) ∗ (const2 + FAULT1), which is rewritten as follows:

Z = const1 ∗ const2 + (const1 + const2) ∗ FAULT1 + FAULT1²
  = ConstChild + (LinearChild + FAULT1) ∗ FAULT1.

After representing this expression in HED, as shown in Fig. 7, we see that the linear part, i.e., the coefficient of the FAULT1 variable, is what expresses the faulty behavior modeled by the FAULT1 variable.
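Extracting the linear child of the FAULT variable can be reproduced with any symbolic package; the short sketch below (ours, using sympy) repeats the nonlinear example above and confirms that the coefficient of FAULT1 is const1 + const2, which is exactly what the HED exposes as the test condition, while the FAULT1² term is left aside.

import sympy as sp

c1, c2, F = sp.symbols('const1 const2 FAULT1')
Z = sp.expand((c1 + F) * (c2 + F))
print(Z)               # FAULT1**2 + FAULT1*const1 + FAULT1*const2 + const1*const2
print(Z.coeff(F, 0))   # const1*const2  -- the fault-independent (const) part
print(Z.coeff(F, 1))   # const1 + const2  -- the linear child, i.e., the test condition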



Fig. 8. High-level test generation approach.

In addition, in contrast to other approaches to test generation, it is not necessary to perform a justification/propagation phase, because HED is able to express primary outputs in terms of primary inputs. If we can find conditions under which the specified fault reaches one of the primary outputs, then, according to the above-mentioned property, we have already fulfilled path sensitization as well as the justification phase all together. It should be noted that if fault propagation needs to go through storage variables, our method tries to symbolically manipulate the transition function, i.e., ns in the previous example, instead of duplicating the circuit by applying time-frame expansion. Hence, our approach can be applied to deep sequential circuits easily. Fig. 8 illustrates our proposed high-level test generation algorithm. A behavioral specification in HDL, or even in a high-level language like C (BS), as well as a fault list (FL) are treated as inputs to the algorithm. Each fault in the FL is characterized by the following properties: the name of the affected signal, the line of the behavioral model, and the type of stuck-at fault. We define a fault model that comprises two kinds of faults: 1) bit failures, including stuck-at less than (−FAULT) and stuck-at greater than (+FAULT) the fault-free value, and 2) condition failures, indicating that each condition can be stuck-at true (causing the else execution path to be removed) or stuck-at false (causing the then execution path to be removed). The experimental results show that such a fault model provides a good correlation with gate-level fault models, i.e., stuck-at zero and one, while allowing a faster and simpler analysis than gate-level approaches. The test generation algorithm starts by selecting and then injecting a fault into the behavioral model (BS) manually. To do so, a variable FAULT is added to (or subtracted from) the symbolic value of the related signal. After that, the FAULT variable is given the highest order among the variables, due to the beneficial property of the HED representation discussed earlier. If fault propagation needs to go through storage variables, the algorithm performs a pre-image computation until the primary inputs are reached. Finally, during the HED representation of the different outputs, they are checked to see whether the linear child of the FAULT variable is nonzero. As soon as we find such an output, the algorithm succeeds in detecting the fault. Hence, the linear child of

Fig. 9. HED representation: Example 3. (a) Output function (out) in HED. (b) Next-state function (ns) in HED.

the FAULT variable is returned as a set of test vectors that are formally represented in HED. Otherwise, the fault is undetectable.

Example 3 (HED Fault Representation): Consider the previous example again and suppose that stuck-at greater than faults (FAULT1, FAULT2, FAULT3, and FAULT4) have been injected into the ADDER (X + Y), SUB (X − Y), MULT (C × Z), and SUB (C − Z) operations, respectively. Based on Definitions 2 and 3, the polynomial function in (7) expresses the out variable. Let the variable ordering be FAULT1 > FAULT2 > FAULT3 > FAULT4 > X > Y > Z > b1 > b2 > ps. After reordering the variables in (7), we thereby obtain (8). Fig. 9(a) illustrates the HED representation of the out variable as a primary output as described in (8). The coefficients of the FAULTi (i = 1–4) variables, i.e., the linear children, express under what conditions the specified fault can be observed at the output out. It should be noted that the const child of the FAULT4 variable represents the fault-independent part of the circuit for this fault:

out = [(ps − 1)(ps − 2)/6][(b2)(C × Z + FAULT3) + (1 − b2) × (C − Z + FAULT4)]
C = [(b1)(X + Y + FAULT1) + (1 − b1)(X − Y + FAULT2)]        (7)

out = FAULT1 × (TP1) + FAULT2 × (TP2) + FAULT3 × (TP3) + FAULT4 × (TP4) + FIP.        (8)

In (8), TPi and FIP stand for the TestPattern for FAULTi and the FaultIndependentPart, respectively, and their equations are as follows:

TP1 = [Z × b1 × b2 + b1(1 − b2)] × [(ps − 1)(ps − 2)/6]
TP2 = [Z × (1 − b1)b2 + (1 − b1)(1 − b2)] × [(ps − 1)(ps − 2)/6]




Fig. 10. HED representation: Example 4.

Fig. 11. High-level to gate-level test conversion.

TP3 = [b2] × [(ps − 1)(ps − 2)/6]
TP4 = [1 − b2] × [(ps − 1)(ps − 2)/6]
FIP = [X × [Z × b1 × b2 + b1(1 − b2) + Z × (1 − b1)b2 + (1 − b1)(1 − b2)] + Y × [Z × b1 × b2 + b1(1 − b2) − Z × (1 − b1)b2 − (1 − b1)(1 − b2)] − Z × (1 − b2)] × [(ps − 1)(ps − 2)/6].        (9)

The stuck-at zero and one faults (FAULT5 and FAULT6) of the input variable i have been injected into transfunc. Let the variable ordering be FAULT5 > FAULT6 > i > ps. We first describe the polynomial function of transfunc in (10). After that, Fig. 9(b) illustrates the HED representation of transfunc according to (11), where TP5, TP6, and FIP_ns stand for the TestPattern for FAULT5, the TestPattern for FAULT6, and the FaultIndependentPart, respectively; their equations are given in (12). The coefficients of the FAULT5 and FAULT6 variables express under what conditions the state machine has an incorrect transition:

transfunc = [(ps − 2)(ps − 4)/3][2] − [(ps − 1)(ps − 4)/2] × [(i) × (4 − FAULT5) + (1 − i) × (2 + FAULT6)] + [(ps − 1)(ps − 2)/6][1]        (10)
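The test patterns above can be reproduced mechanically: for each FAULTi, one simply takes its coefficient (the linear child) in the corresponding faulty polynomial. The Python sketch below (our own cross-check with sympy, not the authors' tool) does this for the out polynomial of (7) and the transfunc polynomial of (10); the resulting expressions match TP1–TP4 in (9) and TP5, TP6 given in (12) below, up to algebraic rearrangement.

import sympy as sp

ps, X, Y, Z, b1, b2, i = sp.symbols('ps X Y Z b1 b2 i')
F1, F2, F3, F4, F5, F6 = sp.symbols('FAULT1:7')

C   = b1*(X + Y + F1) + (1 - b1)*(X - Y + F2)
out = ((ps - 1)*(ps - 2)/6) * (b2*(C*Z + F3) + (1 - b2)*(C - Z + F4))          # eq. (7)
trans = (((ps - 2)*(ps - 4)/3)*2
         - ((ps - 1)*(ps - 4)/2)*(i*(4 - F5) + (1 - i)*(2 + F6))
         + ((ps - 1)*(ps - 2)/6)*1)                                            # eq. (10)

for F in (F1, F2, F3, F4):
    print(F, sp.factor(sp.expand(out).coeff(F, 1)))
for F in (F5, F6):
    print(F, sp.factor(sp.expand(trans).coeff(F, 1)))
# e.g., the FAULT3 coefficient is b2*(ps - 1)*(ps - 2)/6, i.e., TP3 in (9),
# and the FAULT5 coefficient is i*(ps - 1)*(ps - 4)/2, i.e., TP5 in (12).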

Fig. 12. RTL schematic: Example 5.

Fig. 13. Fault simulation algorithm.

transfunc = FAULT5 × (TP5) + FAULT6 × (TP6) + FIP_ns        (11)

TP5 = [(ps − 1)(ps − 4)/2](i)
TP6 = −[(ps − 1)(ps − 4)/2](1 − i)
FIP_ns = [(ps − 2)(ps − 4)/3][2] − [(ps − 1)(ps − 4)/2] × [(i)4 + (1 − i)2] + [(ps − 1)(ps − 2)/6][1].        (12)

TP5 indicates that if the input variable i becomes 1 while the present state is ps = 2 (sB in Fig. 4), a stuck-at-zero fault on i will be detectable, due to having a transition to ps = 2 instead of ps = 4 (sC in Fig. 4). Each TPi in (9) and (12) indicates that the injected fault will be detected if at least one of its conditions can be satisfied. Otherwise, TPi becomes 0 and, therefore, that fault will be undetectable.

Example 4 (Condition Failure): Consider the data path part of the FSMD in Fig. 4 again. Suppose that a stuck-at true fault (FAULT7) has been injected on the Boolean variable b1. Due to the stuck-at true fault definition, the condition part of the first if-then-else statement is always true and, therefore, the else part is removed. The HED representation of the primary output out appears in Fig. 10, where TestPattern is (X + Y)[Z · b2 + (1 − b2)] + Z(1 − b2). This test pattern describes three sets of test vectors that can be used to detect the stuck-at true fault on the variable b1. The first set of test vectors is X + Y ≠ 0, Z ≠ 0, and b2 ≠ 0; the second one is X + Y ≠ 0 and 1 − b2 ≠ 0; and the third one is Z ≠ 0 and (1 − b2) ≠ 0.



TABLE I
Benchmark Characteristics for HED-Based Approach
(#Lines is from the VHDL specification; the remaining columns describe the gate-level specification.)

Benchmark  #Lines  #PI  #PO  #CG   #FF   #TF     #CF
b01        110     2    2    42    5     212     126
b02        70      1    1    23    4     106     59
b03        141     4    4    149   30    638     354
b04        102     11   8    797   66    3804    2200
b05        332     1    36   812   34    3890    2199
b06        128     2    6    36    8     160     88
b07        92      1    8    415   41    1948    1105
b08        89      9    4    165   21    742     404
b09        103     1    1    140   28    670     382
b10        167     11   6    216   17    988     541
b11        118     7    6    732   31    3424    1942
b12        569     5    6    1529  119   7160    4047
b13        296     10   10   307   49    1438    807
b14        509     32   54   5392  215   24 978  14 132
b15        671     36   70   6678  263   29 512  24 255

TABLE II
Experimental Results of HED Representation

Benchmark  Time (s)  Memory (Mbytes)  #EQ  #Var
b01        0.12      0.25             36   55
b02        0.10      0.18             21   29
b03        0.08      0.15             43   93
b04        0.05      0.14             17   40
b05        0.10      0.22             39   71
b06        0.11      0.25             46   62
b07        0.18      0.30             54   78
b08        0.10      0.23             26   42
b09        0.09      0.20             42   52
b10        0.15      0.29             126  145
b11        0.07      0.18             80   85
b12        0.11      0.28             167  227
b13        0.10      0.25             113  187
b14        0.19      0.33             179  253
b15        0.21      0.38             267  326

TABLE III
HED-Based Fault Coverage Computation for Different Implementations

           Area Optimization        Delay Optimization
Benchmark  F.C. (%)  Time (s)       F.C. (%)  Time (s)
b01        100       77.5           100       77.5
b02        100       3.7            100       3.6
b03        76.45     33.2           75.72     32.5
b04        99.95     72.5           99.95     72.5
b05        90.15     102.5          89.95     97.3
b06        98.86     1.5            98.86     1.4
b07        88.42     106.6          88.42     106.6
b08        99.75     155.5          99.75     155.5
b09        90.42     21.5           90.42     21.5
b10        99.9      3.4            99.82     2.8
b11        99.95     70.1           99.95     70.1
b12        90.95     353.2          89.98     350.8
b13        99.88     487.1          99.88     487.1
b14        99.99     1175.5         99.99     1172.5
b15        81.33     1851.3         79.99     1807.8

C. Gate-Level Test Conversion

Although functional tests generated at the behavioral level are effective in traversing much of the control space of a digital circuit, exercising all values of the variables is not possible due to the large number of possible values. Hence, for testing arithmetic operators, after generating some constraints on the values that exercise potential faults, a gate-level approach may be more effective. We employ a gate-level ATPG to generate a test pattern targeting structural faults in the corresponding functional unit, based on the algorithm in Fig. 11. For this purpose, each time we select a high-level fault from FL and assume that it is propagated to a primary output according to its test pattern in the TP list. We use functional methods based on the pre-image computation techniques to compute justification sequences. We assume that every FSMD has an initial state, and pre-image computation is continued until the initial state is reached. Note that we do not assume that the circuits have synchronizing sequences. According to the different conditions generated for this fault, we assign values to the related variables in each time slice to generate a sequence of gate-level test vectors. Although we utilize a PODEM-like algorithm, any other gate-level test pattern generation algorithm can be applied. Moreover, instead of using ATPG tools, we are able to take advantage of SMT solvers to solve the list of equations extracted from the high-level test patterns [31].

Example 5 (Stuck-at Gate-Level Test Extraction): Consider the behavioral test pattern TP1, generated for FAULT1 in Example 3. This test shows that, in order to detect a faulty behavior in the ADDER operation which computes X + Y, one of the following conditions should be satisfied:

1) (b1) ≠ 0, (b2) ≠ 0, (Z) ≠ 0, (ps − 1)(ps − 2) ≠ 0
2) (b1) ≠ 0, (1 − b2) ≠ 0, (ps − 1)(ps − 2) ≠ 0.        (13)

Fig. 12 depicts an RTL schematic of the FSMD in Example 2. To extract gate-level test vectors for condition 1) in (13), first of all we consider the state condition, i.e., (ps − 1)(ps − 2) ≠ 0, to be satisfied, and then we decide in which time slices the other conditions, i.e., b1 ≠ 0, b2 ≠ 0, and Z ≠ 0, should be satisfied. According to the state encoding approach described in Section III-A, (ps − 1)(ps − 2) ≠ 0 will be satisfied if ps is set to 4, i.e., the sC state in Fig. 4. On the other hand, in order to reach this state from the initial state sA, we need to recursively apply the pre-image computation to the state transition function until the initial state, i.e., sA, is reached. Therefore, to propagate FAULT1 to the primary output out at time slice t0 + 1, the variables b2 and Z should be set to one and a nonzero value, respectively, at time slice t0. In this way, Reg_C = Reg_b ∗ Z is computed by the MUL unit in Fig. 12. After doing the first pre-image computation, the present state sB (Reg_B in Fig. 12) is reached, where the primary input i, i.e., the select bit of Mux2, should be set to one at time slice t0 − 1. Performing the second pre-image computation brings us to the initial state sA, while the variable b1 should be set to one at time slice t0 − 2. This causes the ADDER unit, the faulty operation, to compute Reg_A = X + Y. In this example, since there is no constraint on the X and Y variables, all combinations of their values are applied to the ADDER unit once the conditions for the different time slices are fixed. Note that, in contrast to contemporary methods of sequential circuit test generation, our method does not need to have different copies of the circuit for different time slices. Instead, in our method the circuit is symbolically described in HED.

D. Gate-Level Fault Simulation

In order to evaluate the generated test vectors, we perform fault simulation as shown in Fig. 13, where GT and CFL are the gate-level test vectors and the collapsed fault list, respectively. For this purpose, we provide a library of primitive gates which consists of two models for each gate: one model for the normal operation of the gate and the other for the faulty behavior of the related gate. A synthesis tool is used to obtain a gate-level implementation of the circuit given at the RT-level, with the synthesis library set to our developed library. As a result, two models of each circuit are provided. One model is a list of "fault-independent results," which is provided by simulating all tests in the test vector list (GT) and represents the normal behavior where all gates work properly. The second one indicates a faulty behavior of the circuit, where in each simulation run one of the gates works according to its faulty responses. We run the simulation for each circuit according to the number of collapsed faults in CFL, and one more time for the fault-independent model, to compute the coverage of stuck-at faults. Note that our methodology is independent of the fault simulation technique, and any other fault simulator can be utilized.
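The fault-simulation flow of Fig. 13 amounts to one fault-free simulation plus one simulation per collapsed fault, comparing the output responses. The toy single-output stuck-at fault simulator below (a minimal Python sketch of ours, purely illustrative, on a hypothetical netlist; the authors use a gate library and a synthesis-based flow) shows the idea.

from itertools import product

# A tiny combinational netlist (hypothetical): net -> (gate, inputs); 'a', 'b', 'c' are primary inputs.
NETLIST = {
    'n1':  ('AND', ('a', 'b')),
    'n2':  ('OR',  ('n1', 'c')),
    'out': ('NOT', ('n2',)),
}
GATES = {'AND': lambda x, y: x & y, 'OR': lambda x, y: x | y, 'NOT': lambda x: 1 - x}

def simulate(pattern, fault=None):
    # Evaluate the netlist in (topological) insertion order; fault = (net, stuck_value).
    values = dict(pattern)
    if fault and fault[0] in values:          # stuck-at fault on a primary input
        values[fault[0]] = fault[1]
    for net, (gate, ins) in NETLIST.items():
        values[net] = GATES[gate](*(values[x] for x in ins))
        if fault and fault[0] == net:         # stuck-at fault on an internal net
            values[net] = fault[1]
    return values['out']

patterns = [dict(zip('abc', bits)) for bits in product((0, 1), repeat=3)]          # the test set (GT)
faults = [(net, v) for net in ('a', 'b', 'c', 'n1', 'n2', 'out') for v in (0, 1)]  # collapsed list (CFL)
detected = {f for f in faults for p in patterns if simulate(p, f) != simulate(p)}
print('stuck-at fault coverage: %.1f%%' % (100.0 * len(detected) / len(faults)))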




TABLE IV
Comparing Fault Coverage of HED-Based Test Generation Method With ARTIST, RAGE99, and ARPIA

                 ARTIST                        RAGE99                        ARPIA                         HED
Benchmark  F.C.(%)  #Test  Time (s)     F.C.(%)  #Test  Time (s)     F.C.(%)  #Test  Time (s)      F.C.(%)  #Test  Time (s)
b01        100      1061   4118         100      259    549.0        100      N.A.   78.8          100      315    77.5
b02        99.33    940    1731         99.33    114    51.61        99.33    N.A.   39.1          100      175    3.6
b03        74.33    374    5131         73.48    174    69.36        74.82    N.A.   1089.9        75.72    146    32.5
b04        89.42    427    6905         80.18    83     518.4        90.44    N.A.   1627.8        99.95    107    72.5
b05        33.50    2800   33 393       40.26    68     2583.2       33.43    N.A.   1932.2        89.95    216    97.3
b06        97.02    62     2315         93.71    125    65.56        97.02    N.A.   200.3         98.86    63     1.4
b07        57.53    461    2251         58.28    351    1150.8       58.28    N.A.   9297.2        88.42    228    106.6
b08        86.27    329    2106         95.21    1005   1238.6       91.39    N.A.   2832.3        99.75    138    155.5
b09        81.33    1187   9054         81.89    958    655.8        85.33    N.A.   4970.5        90.42    104    21.5
b10        90.42    586    10 851       90.89    364    152.6        91.18    N.A.   778.3         99.82    60     2.8
b11        85.98    532    5092         91.11    1222   1741.0       91.14    N.A.   34 837.1      99.95    114    70.1
b12        45.99    5541   67 575       20.86    155    1231.7       20.92    N.A.   7890.2        89.98    756    350.8
b13        68.37    4538   43 450       84.49    3303   5531.4       82.56    N.A.   2801.8        99.88    896    487.1
b14        79.65    4743   55 240       79.01    4597   9722.9       81.78    N.A.   473 741.9     99.99    1023   1172.5
b15        31.96    2733   60 990       38.11    2838   11 566.9     32.50    N.A.   590 611.3     79.99    840    1807.8

IV. Experimental Setup and Results

In this section, we report some preliminary experimental results that show the robustness of the HED-based test generation compared to other methods. We have implemented the algorithms in C++ and carried out the experiments on an Intel 933 MHz processor with 256 MB of memory running Windows XP. The effectiveness of the proposed approach has been evaluated on the 1999 International Test Conference VHDL benchmarks (ITC99) [21]. General information about the benchmark circuits is given in Table I. Column Benchmark gives the benchmark's name, whereas column VHDL Spec. provides the number of lines in the RT-level representation. The GATE Level Spec. columns give an idea of the size of the circuits: the numbers of primary inputs (PI), primary outputs (PO), combinational gates (CG), flip-flops (FF), total faults (TF), and collapsed faults (CF), respectively. We use the line oriented structural fault collapsing method presented in [22] to provide the collapsed fault list. Experimental results have been compared with ARTIST [7], ARPIA [8], and RAGE99 [9], which are behavioral RTL ATPG tools based on evolutionary and genetic algorithms. ARTIST and RAGE99 were run on a Sun Ultra 5 running at 333 MHz with 256 MB of memory, and ARPIA was run on a Sun Enterprise 250 running at 400 MHz and equipped with 2 GB of memory. In order to represent a design in HED, first of all we utilize GAUT [18] as a high-level synthesis tool to obtain the FSMD model of the design in a semi-automatic way. After that, the data path and controller parts of the design are represented in HED based on the polynomial model defined in Section III-A. Table II reports the results of running the HED package for each benchmark. The CPU time, memory usage, the number of equations, and the number of variables needed to generate each high-level test pattern are reported in columns Time in seconds, Memory in MB, #EQ, and #Var, respectively. The test patterns generated by HED are simulated at the gate level with the stuck-at faults considered. Table III tabulates the results of running our approach on some benchmarks where



different optimizations are considered to synthesize the RTL code. Table IV summarizes the preliminary results of our method in comparison with ARTIST, ARPIA, and RAGE99. In this table, columns F.C. and Time indicate the fault coverage and the CPU time required by the fault simulator to report the related fault coverage. Although column #Test shows the number of test vectors reported by the methods in Table IV, this information was not accessible for the ARPIA approach. Although the HED package was run on Windows XP and the other methods were run on Sun platforms, in order to have a fair comparison of their processing times, we have used the benchmarks in [23], which evaluate the performance of different platforms. It is reported in [23] that an Intel 933 MHz processor running Windows XP is about two times faster than the other platforms mentioned in this paper. In other words, the CPU times reported in the HED column of Table IV should be multiplied by 2 in order to be comparable with the other CPU times tabulated in Table IV. For example, in circuit b05, ARTIST achieved 33 393 s, ARPIA achieved 1932.2 s, and RAGE99 achieved 25 832.2 s, while our method achieved 97.3 s, which needs to be multiplied by 2 since our platform is almost two times faster than the other platforms. Obviously, the run times (column Time in Table IV) for test generation using HED for all circuits reported in our experiments are almost two orders of magnitude lower than those reported using the other methods. Therefore, this allows the proposed approach to provide coverage driven high-level test generation within practical time.

V. Summary and Conclusion

In this paper, a high-level test generation method has been introduced which models sequential circuits with a set of polynomial functions and generates behavioral test patterns from the faulty behavior using a hybrid canonical representation, namely HED. After that, the fault coverage at the logic level is computed by converting the high-level test vectors to gate-level test patterns and applying them to the related logic circuits. We have applied our approach to some standard benchmarks which cover Boolean logic as well as arithmetic expressions. Preliminary experimental results show the efficiency even on large circuits. The limitation of our method is that the extraction of polynomial functions from the behavioral description and the conversion of the high-level tests to gate-level ones are performed manually; these steps need to be automated.

References

[1] F. Ferrandi, F. Fummi, and D. Sciuto, "Implicit test generation for behavioral VHDL models," in Proc. Int. Test Conf. (ITC), 1998, pp. 587–596.
[2] K.-T. Cheng and A. S. Krishnakumar, "Automatic generation of functional vectors using the extended finite state machine model," ACM Trans. Design Autom. Electron. Syst., vol. 1, no. 1, pp. 57–79, 1996.
[3] M. K. Ganai, A. Aziz, and A. Kuehlmann, "Enhancing simulation with BDDs and ATPGs," in Proc. Design Automat. Conf. (DAC), 1999, pp. 385–390.
[4] P. H. Ho, T. Shiple, K. Harer, J. Kukula, R. Damiano, V. Bertacco, J. Taylor, and J. Long, "Smart simulation using collaborative formal and simulation engines," in Proc. Int. Conf. Comput.-Aided Design (ICCAD), 2000, pp. 120–126.


[5] F. Fallah, S. Devadas, and K. Keutzer, "Functional vector generation for HDL models using linear programming and 3-satisfiability," in Proc. Design Automat. Conf. (DAC), 1998, pp. 528–533.
[6] C.-Y. Huang and K.-T. Cheng, "Using word-level ATPG and modular arithmetic constraint-solving techniques for assertion property checking," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 20, no. 3, pp. 381–391, Mar. 2001.
[7] F. Corno, M. S. Reorda, and G. Squillero, "ITC'99 benchmarks and first ATPG results," IEEE Design Test Comput., vol. 17, no. 3, pp. 44–53, Jul.–Sep. 2000.
[8] G. Cumani, "High-level test of electronic systems," Ph.D. dissertation, Dipartimento di Automatica e Informatica, Politecnico di Torino, Torino, Italy, 2003.
[9] F. Corno, M. Sonza Reorda, and G. Squillero, "High-quality test pattern generation for RT-level VHDL descriptions," in Proc. 2nd Int. Workshop Microprocessor Test Verification (MTV) Common Challenges Solutions, Sep. 1999, pp. 119–124.
[10] I. Ghosh and M. Fujita, "Automatic test pattern generation for functional RTL circuits using assignment decision diagrams," IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., vol. 20, no. 3, pp. 402–415, Mar. 2001.
[11] L. Zhang, M. S. Hsiao, and I. Ghosh, "Automatic design validation framework for HDL description via RTL ATPG," in Proc. Asian Test Symp. (ATS), 2003, pp. 148–153.
[12] L. Zhang, I. Ghosh, and M. Hsiao, "Efficient sequential ATPG for functional RTL C," in Proc. Int. Test Conf. (ITC), 2003, pp. 290–298.
[13] B. Alizadeh and M. Fujita, "High-level test generation without ILP and SAT solvers," in Proc. Int. Workshop High-Level Design Validation Testing (HLDVT), 2007, pp. 298–304.
[14] B. Alizadeh and M. Fujita, "HED: A canonical and compact hybrid word-Boolean representation as a formal model for hardware/software co-designs," in Proc. Int. Workshop Constraints Formal Verification (CFV), 2007, pp. 15–29.
[15] I. Ghosh, A. Raghunathan, and N. K. Jha, "A design for testability technique for RTL circuits using control/data flow extraction," in Proc. Int. Conf. Comput.-Aided Design (ICCAD), 1998, pp. 329–336.
[16] B. Becker, R. Drechsler, and R. Enders, "On the representational power of bit-level and word-level decision diagrams," in Proc. Asia South Pacific Design Automat. Conf. (ASP-DAC), 1997, pp. 461–467.
[17] D. Gajski and L. Ramachandran, "Introduction to high-level synthesis," IEEE Design Test Comput., vol. 11, no. 4, pp. 44–54, Oct. 1994.
[18] E. Martin, O. Sentieys, H. Dubois, and J. L. Philippe, "GAUT: An architectural synthesis tool for dedicated signal processors," in Proc. Eur. Design Automat. Conf. (EURO-DAC), 1993, pp. 14–19.
[19] D. Corvino, I. Epicoco, F. Ferrandi, F. Fummi, and D. Sciuto, "Automatic VHDL restructuring for RTL synthesis optimization and testability improvement," in Proc. Int. Conf. Comput. Design (ICCD), 1998, pp. 436–441.
[20] B. Alizadeh and M. Fujita, "A hybrid approach for equivalence checking between system level and RTL descriptions," in Proc. Int. Workshop Logic Synthesis (IWLS), 2007, pp. 298–304.
[21] S. Davidson, ITC99 Benchmarks, 1999 [Online]. Available: http://www.cad.polito.it/tools/itc99.html
[22] M. Nadjarbashi, Z. Navabi, and M. Movahedin, "Line oriented structural equivalence fault collapsing," in Proc. Int. Workshop Model Test, 2000, pp. 45–50.
[23] J. L. Henning, Standard Performance Evaluation Corporation (SPEC), 2006 [Online]. Available: http://www.spec.org/cpu
[24] J. Lee and J. H. Patel, "Hierarchical test generation under intensive global functional constraints," in Proc. Design Automat. Conf. (DAC), 1992, pp. 261–266.
[25] R. S. Tupuri, A. Krishnamachary, and J. A. Abraham, "Test generation for gigahertz processors using an automatic functional constraint extractor," in Proc. Design Automat. Conf. (DAC), 1999, pp. 647–652.
[26] V. M. Vedula and J. A. Abraham, "A novel methodology for hierarchical test generation using functional constraint composition," in Proc. Int. Workshop High-Level Design Validation Testing (HLDVT), 2000, pp. 9–14.
[27] L. Chen, S. Ravi, A. Raghunathan, and S. Dey, "A scalable software-based self-test methodology for programmable processors," in Proc. Design Automat. Conf. (DAC), 2003, pp. 548–553.
[28] A. Apostolakis, M. Psarakis, D. Gizopoulos, and A. Paschalis, "A functional self-test approach for peripheral cores in processor-based SoCs," in Proc. Int. On-Line Testing Symp., 2007, pp. 271–276.

Authorized licensed use limited to: UNIVERSITY OF TOKYO. Downloaded on April 30,2010 at 04:25:20 UTC from IEEE Xplore. Restrictions apply.

748

IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, VOL. 29, NO. 5, MAY 2010

[29] Q. Zhang, and I. G. Harris, “A data flow fault coverage metric for validation of behavioral HDL descriptions,” in Proc. Int. Conf. Comput. Aided Design (ICCAD), 2000, pp. 369–373. [30] L. Lingappan, S. Ravi, and N. K. Jha, “Test genreation for nonseparable RTL controller-datapath circuits using a satisfiability based approach,” in Proc. Int. Conf. Comput. Design (ICCD), 2003, pp. 187–193. [31] B. Alizadeh, and M. Fujita, “Guided gate-level ATPG for sequential circuits using a high-level test generation approach,” in Proc. Asia South Pacific-Design Automat. Conf. (ASP-DAC), 2010, pp. 425–430.

Bijan Alizadeh received the B.S., M.S., and Ph.D. degrees in computer engineering from the University of Tehran, Iran, in 1995, 1998, and 2004, respectively. From 2005 to 2008, he was with Sharif University of Technology, Tehran, Iran. Since 2009, he has been with Very-Large-Scale Integration Design and Education Center, University of Tokyo, Tokyo, Japan, where he is currently a Postdoctoral Research Fellow. His current research interests include the development of better algorithms and heuristics for improving the efficiency of formal verification and high-level synthesis tools. He is also researching the use of word level decision diagrams and SMT solvers in generating high-level test patterns which improve ATPG gate-level fault coverage.

Mohammad Mirzaei received the B.S. degree in electrical engineering from the Khajeh Nasir Toosi of Technology, Tehran, Iran, in 2005, and the M.S. degree in digital electronic systems from Sharif University of Technology, Tehran, Iran, in 2007. He is currently pursuing the Ph.D. degree in digital electronic systems at Sharif University of Technology. During his M.S. study, he worked on the application of word-level canonical decision diagrams to formal verification of arithmetic circuits. His current research interests include the development of highlevel design tools with emphasis on testing and design for testability. Masahiro Fujita received the B.S. degree in electrical engineering, and the M.S. and Ph.D. degrees in information engineering from the University of Tokyo, Tokyo, Japan in 1980, 1982 and 1985, respectively. In 1985, he was with Fujitsu Laboratories Ltd., Nakahara, Kawasaki, Japan. From 1993 to 2000, he was the Director of the Computer Aided Design Research Group with Fujitsu’s U.S. Research Office, Sunnyvale, CA. In March 2000, he became a Professor with the Department of Electronic Engineering, University of Tokyo, and he is currently a professor with Very-Large-Scale Integration Design and Education Center within the same university. He has been involved in many research projects on various aspects of formal verification. His current research interests include verification and synthesis in high level and system level designs.

Authorized licensed use limited to: UNIVERSITY OF TOKYO. Downloaded on April 30,2010 at 04:25:20 UTC from IEEE Xplore. Restrictions apply.